LECTURE 2: SUPPORT FOR CORRECTNESS IN CONCURRENCY

Intro to Concurrent Processing
• Recap on Threads and Processes.
• Basic models of correctness in concurrency.
• Software Solutions to Mutual Exclusion:
– Dekker's Algorithm.
– Mutual Exclusion for n processes: The Bakery Algorithm.
• Higher level supports for Mutual Exclusion:
– Semaphores & Monitors
– Emulating Semaphores with Monitors & Vice Versa
• Solution of Classical Problems of Synchronization:
– The Readers-Writers Problem;
– The Dining Philosophers Problem in SR;
– The Sleeping Barber Problem.

Introduction to Threads
- Basic idea: build *virtual* processors in software, on top of *physical* processors:
- **Processor**:
  - provides a set of instructions (with the ability to automatically execute a series of them).
- **Thread**:
  - a minimal software processor in whose context a series of instructions can be executed.
  - saving a thread's context ⇒ stopping the current execution & saving all the data needed to resume it later.
- **Process**:
  - a software processor in whose context one or more threads may be executed.
  - executing a thread means executing a series of instructions in its context.

Context Switching:
- **Processor context:**
  - the minimal collection of values stored in processor registers needed to run some instructions, e.g., stack pointer, addressing registers, program counter.
- **Thread context:**
  - the minimal collection of values stored in registers & memory needed to run some instructions, i.e., the processor context plus thread state.
- **Process context:**
  - the minimal collection of values stored in registers & memory needed to execute a thread, i.e., the thread context, but now also at least the MMU register values.
- **Observations:**
  - threads share the same address space ⇒ thread context switching can happen entirely without the OS; process switching is generally more expensive, as the OS must get involved.
  - creating & destroying threads is much cheaper than doing so for processes.
Threads/Processes Recap (/3)
Threads and Operating Systems:
— **Main issue:**
- should the OS *kernel* provide threads, or should they be implemented as *user-level* packages?
— **User-space solution:**
- a single process handles all operations ⇒ implementations can be very efficient.
- all services provided by the kernel are done on behalf of the process the thread lives in ⇒ if the kernel blocks a thread, the entire process blocks.
- we use threads for lots of external events, and threads block on a per-event basis ⇒ if the kernel cannot tell the threads apart, how can it signal the right one when an event occurs?
— **Kernel solution:**
- the kernel contains the thread package implementation ⇒ all operations (creation, synchronisation) are provided as system calls.
- operations that block a thread are no longer a problem: the kernel schedules another available thread within the same process.
- handling external events is simple: the kernel schedules the event's thread.
- big problem: loss of efficiency, because each thread operation requires a trap to the kernel.

Threads/Processes Recap (/4)
Threads and Operating Systems (cont'd):
– Conclusion:
• Try to mix user-level and kernel-level threads. We'll return to the *thread pool* abstraction when looking at Java.
• For now, we need to ensure that threads do not interfere with each other.
• This brings us on to the topic of *Concurrent Correctness*.

A Model of Concurrent Programming
• We can define a concurrent program as the interleaving of sets of sequential atomic instructions.
– i.e. a set of interacting sequential processes that execute at the same time, on the same or different processors.
– the processes are said to be interleaved, i.e. at any given time each processor is executing one of the instructions of the sequential processes.
– the relative rate at which the instructions of each process execute is not important.
• Each sequential process consists of a series of atomic instructions.
• An atomic instruction is an instruction that, once it starts, proceeds to completion without interruption.
• Different processors have different atomic instructions, and this can have a big effect.

A First Attempt to Define Correctness
P1: load reg, N
P2: load reg, N
P1: add reg, #1
P2: add reg, #1
P1: store reg, N
P2: store reg, N
• If the processor includes an instruction like INC that increments memory atomically, then this program is correct no matter which instruction is executed first.
• If all arithmetic must be performed in registers, then the interleaving shown above does not produce the desired result: both processes load the same value of N, and one increment is lost.
• This dependency on unforeseen circumstances is known as a Race Condition.
• A concurrent program must be correct under all possible interleavings.

Correctness: A More Formal Definition
• If $P(\tilde{a})$ is a property of the input (precondition), and $Q(\tilde{a}, \tilde{b})$ is a property of the input & output (postcondition), then correctness is defined as:
• Partial correctness: $P(\tilde{a}) \land \text{Terminates}\{\text{Prog}(\tilde{a}, \tilde{b})\} \Rightarrow Q(\tilde{a}, \tilde{b})$
• Total correctness: $P(\tilde{a}) \Rightarrow [\text{Terminates}\{\text{Prog}(\tilde{a}, \tilde{b})\} \land Q(\tilde{a}, \tilde{b})]$
• Totally correct programs terminate. A totally correct specification of the incrementing task is:
$a_0 \in \mathbb{N} \Rightarrow [\text{Terminates}\{\text{INC}(a_0, a)\} \land a = a_0 + 1]$
(where $a_0$ is the value before the increment and $a$ the value after).

Types of Correctness Properties
There are 2 types of correctness properties:
1. **Safety properties**
- **Mutual exclusion** Two processes must not interleave certain sequences of instructions.
- **Absence of deadlock** Deadlock is when a non-terminating system can no longer respond to any signal.
2. **Liveness properties**
- **Absence of starvation** Every process that makes a request eventually has it granted; e.g., information sent is eventually delivered.
- **Fairness** Any contention between processes must eventually be resolved.

Correctness: Fairness
- **Weak Fairness** If a process continuously makes a request, eventually it will be granted.
- **Strong Fairness** If a process makes a request infinitely often, eventually it will be granted.
- **Linear waiting** If a process makes a request, it will be granted before any other process is granted the request more than once.
- **FIFO** If a process makes a request, it will be granted before any other process that makes a later request.

Mutual Exclusion
• As seen, a concurrent program must be correct under all allowable interleavings.
• So there must be some sections of the different processes which cannot be allowed to interleave.
• These are called critical sections (CSs).
• We will first attempt to solve the mutual exclusion problem in software, before turning to more sophisticated solutions.

```
// Pseudo code showing the shape of each process sharing a critical section
while (true)
    // Non_Critical_Section
    // Pre_protocol
    // Critical_Section
    // Post_protocol
end while
```

Software Solutions to Mutual Exclusion Problem # 1
(First attempt: a single shared variable `turn`, initially 1, enforces strict alternation — `p` busy-waits until `turn == 1`, enters its CS, then sets `turn = 2`; `q` does the converse.)
This solution satisfies mutual exclusion. ✔
It cannot deadlock, as both `p`, `q` would have to loop on their `turn` tests infinitely and fail:
- this implies `turn==1` and `turn==2` at the same time.
No starvation: starvation would require one task to execute its CS infinitely often while the other remains in its pre-protocol.
It can fail in the absence of contention: if `p` halts, `q` will eventually fail forever in its pre-protocol.
Even if `p`, `q` are guaranteed not to halt, both are forced to execute at the same rate. This, in general, is not acceptable.

Software Solutions to Mutual Exclusion Problem # 2
• The first attempt failed because both processes shared the same variable.
• (Second attempt: each process has its own flag, `wantp` and `wantq`; a process waits until the other's flag is 0, then sets its own flag to 1 and enters its CS.)
• Unfortunately this violates the mutual exclusion requirement.
• To prove this we only need to find one interleaving that allows `p` & `q` into their CSs at the same time.
• Starting from the initial state, we have:
p checks `wantq` and finds `wantq` = 0.
q checks `wantp` and finds `wantp` = 0.
p sets `wantp` = 1.
q sets `wantq` = 1.
p enters its critical section.
q enters its critical section.
QED

Software Solutions to Mutual Exclusion Problem # 3
The problem with #2 is that once the pre-protocol loop has completed, nothing stops the process from entering its CS. So in the third attempt each process sets its own flag *before* testing the other's: the pre-protocol loop is effectively made part of the critical section.
We can prove that the mutual exclusion property holds. To do this we need to prove that the following equations are invariants:
\[ \text{wantp}=1 \equiv at(c_1) \lor at(d_1) \lor at(e_1) \quad \text{Eqn(1)} \]
\[ \text{wantq}=1 \equiv at(c_2) \lor at(d_2) \lor at(e_2) \quad \text{Eqn(2)} \]
\[ \neg \{at(d_1) \land at(d_2)\} \quad \text{Eqn(3)} \]
(here \( at(x) \) means \( x \) is the next instruction to be executed in that process.)

Software Solutions # 3 (cont'd)
• Eqn (1) is initially true:
– only the $b_1 \rightarrow c_1$ and $e_1 \rightarrow a_1$ transitions can affect its truth,
– but each of these transitions also changes the value of $\text{wantp}$, preserving the equivalence.
• A similar proof holds for Eqn (2).
• Eqn (3) is initially true, and
– can only be negated by a $c_2 \rightarrow d_2$ transition while $at(d_1)$ is true.
– But by Eqn (1), $at(d_1) \Rightarrow \text{wantp} = 1$, so $c_2 \rightarrow d_2$ cannot occur, since this transition requires $\text{wantp} = 0$. A similar proof holds for process q.
• But there is a problem with deadlock, if the program executes one instruction from each process alternately:
p assigns 1 to $\text{wantp}$.
q assigns 1 to $\text{wantq}$.
p tests $\text{wantq}$ & remains in its do loop.
q tests $\text{wantp}$ & remains in its do loop.
Result: Deadlock!

Software Solutions to Mutual Exclusion Problem # 4
• The problem with the third proposed solution was that once a process indicated its intention to enter its CS, it also **insisted** on entering its CS.
• We need some way for a process to relinquish its attempt if it fails to gain immediate access to its CS, and to try again.

Software Solutions to Mutual Exclusion Problem # 4
This proposal has two drawbacks:
1. A process can be starved.
There are interleavings in which a process never enters its critical section.
2. The program can *livelock*.
This is a weaker form of deadlock. In *deadlock* there is no possible interleaving which allows the processes to enter their CSs. In *livelock*, some interleavings succeed, but there are execution sequences which never succeed.

Software Solutions # 4 (cont'd)
Proof of Failure of Attempt 4:
1. By Starvation
*p* sets *wantp* to 1.
*p* completes a full cycle:
- checks *wantq*, enters its CS,
- resets *wantp*, does its non-CS,
- sets *wantp* to 1.
*q* sets *wantq* to 1.
*q* checks *wantp*, sees *wantp* = 1 & resets *wantq* to 0.
*q* sets *wantq* to 1 again, and so on: *p* can cycle forever while *q* never enters its CS.
2. By Livelock
*p* sets *wantp* to 1.
*q* sets *wantq* to 1.
*p* tests *wantq*, remains in its do loop.
*q* tests *wantp*, remains in its do loop.
*p* resets *wantp* to 0 to relinquish its attempt to enter its CS.
*q* resets *wantq* to 0 to relinquish its attempt to enter its CS.
*p* sets *wantp* to 1.
*q* sets *wantq* to 1, etc.

Dekker's Algorithm
• A combination of the first and fourth proposals:
– the first proposal explicitly passed the right to enter the CS between the processes,
– whereas the fourth proposal gave each process its own variable to prevent problems in the absence of contention.
• In Dekker's algorithm the right to *insist* on entering a CS is explicitly passed between the processes.

Dekker's Algorithm (cont'd)
```c
/* Copyright © 2006 M. Ben-Ari. */
int wantp = 0; int wantq = 0; int turn = 1;

void p() {
  while (1) {
    cout << "p non-CS\n";
    wantp = 1;
    while (wantq == 1) {
      wantp = 0;
      while (turn != 1);   /* wait for p's turn */
      wantp = 1;
    }
    cout << "p CS\n";
    turn = 2; wantp = 0;
  }
}

void q() {
  while (1) {
    cout << "q non-CS\n";
    wantq = 1;
    while (wantp == 1) {
      wantq = 0;
      while (turn != 2);   /* wait for q's turn */
      wantq = 1;
    }
    cout << "q CS\n";
    turn = 1; wantq = 0;
  }
}

main() { /* As before */ }
```

Mutual Exclusion for n Processes: The Bakery Algorithm
• Dekker's Algorithm solves the mutual exclusion problem for 2 processes.
• For the $N$-process mutual exclusion problem there are many algorithms; all are complicated and slow relative to other methods.
• The Bakery Algorithm is one in which a process takes a numbered ticket (whose value constantly increases) when it wants to enter its CS.
• The process with the lowest current ticket gets to enter its CS.
• This algorithm is not practical because:
– ticket numbers are unbounded if some process is always in its critical section, and
– even in the absence of contention it is very inefficient, as each process must query the other processes for their ticket numbers.

```c
/* Copyright (C) 2006 M. Ben-Ari. */
const int NODES = 3;
int num[NODES];     /* ticket numbers, 0 = not competing */
int choose[NODES];  /* 1 while a process is choosing a ticket */

int Max() {
  int Current = 0, i;
  for (i = 0; i < NODES; i++)
    if (num[i] > Current) Current = num[i];
  return Current;
}

void p(int i) {
  int j;
  while (1) {
    cout << "process " << i << " non-CS\n";
    choose[i] = 1;
    num[i] = 1 + Max();
    choose[i] = 0;
    for (j = 0; j < NODES; j++) {
      if (j != i) {
        while (choose[j]);            /* wait while j is choosing */
        while (num[j] != 0 &&         /* wait while j has priority */
               (num[j] < num[i] ||
                (num[j] == num[i] && j < i)));
      }
    }
    cout << "process " << i << " CS\n";
    num[i] = 0;
  }
}

main() {
  int j;
  for (j = 0; j < NODES; j++) num[j] = 0;
  for (j = 0; j < NODES; j++) choose[j] = 0;
  cobegin { p(0); p(1); p(2); /* 3 processes */ }
}
```

Mutual Exclusion for N Processes: The Bakery Algorithm (cont'd)

HIGHER LEVEL SUPPORT FOR MUTUAL EXCLUSION: SEMAPHORES & MONITORS

Semaphores
- A more general synchronization mechanism.
- Operations: *P* (wait) and *V* (signal).
- *P*(S)
  - If semaphore variable *S* is nonzero, decrements *S* and returns.
  - Else, suspends the process.
- *V*(S)
  - If there are processes blocked on *S*, restarts exactly one of them.
  - Else, increments *S* by 1.
- The following invariants are true for semaphores:
\[ S \geq 0 \quad (1) \]
\[ S = S_0 + \#V - \#P \quad (2) \]
where \( S_0 \) is the initial value of semaphore \( S \), and \( \#P \), \( \#V \) count completed P and V operations.

Semaphores for Mutual Exclusion
• With semaphores, guaranteeing mutual exclusion for \( N \) processes is trivial.
```c
semaphore mutex = 1;

void proc(int i) {
  while (1) {
    // Non-critical section
    P(mutex);  // grab the mutual exclusion semaphore
    // Critical section
    V(mutex);  // release the mutual exclusion semaphore
  }
}

int main() {
  cobegin { proc(1); proc(2); }
}
```

Semaphores: Proof of Mutual Exclusion
- **Theorem** Mutual exclusion is satisfied.
- **Proof:** Let $\#CS$ be the number of processes in their CSs.
- We need to prove that $mutex + \#CS = 1$ is an invariant.
Eqn (1): $\#CS = \#P - \#V$ (from the program structure)
Eqn (2): $mutex = 1 - \#P + \#V$ (semaphore invariant, $S_0 = 1$)
Eqn (3): $mutex = 1 - \#CS$ (from (1) and (2))
$\Rightarrow$ $mutex + \#CS = 1$ (rearranging (3))
QED

Semaphores: Proof of No Deadlock
**Theorem** The program cannot deadlock.
**Proof:**
- Deadlock requires all processes to be suspended on their $P(\text{mutex})$ operations.
- Then $\text{mutex} = 0$ and $\#CS = 0$, as no process is in its critical section.
- But the critical section invariant just proven is $\text{mutex} + \#CS = 1$
\[ \Rightarrow 0 + 0 = 1 \]
which is clearly impossible.

Types of Semaphores
- Defined above is a general semaphore. A *binary semaphore* is a semaphore that can only take the values 0 and 1.
- The choice of which suspended process to wake gives the following definitions:
  - *Blocked-set semaphore* Awakens any one of the suspended processes.
  - *Blocked-queue semaphore* Suspended processes are kept in a FIFO queue & are awakened in order of suspension.
  - *Busy-wait semaphore* The semaphore value is tested in a busy-wait loop; the test itself is atomic, but there may be interleavings between loop cycles.

Types of Semaphores: Proofs
• **Theorem** With busy-wait semaphores, starvation is possible.
• **Proof:** Consider the following execution sequence for 2 processes:
1. P(1) executes $P(\text{mutex})$ and enters its critical section.
2. P(2) executes $P(\text{mutex})$, finds $\text{mutex}=0$ and loops.
3. P(1) finishes its CS, executes $V(\text{mutex})$, loops back, executes $P(\text{mutex})$ and re-enters its CS.
4. P(2) tests $\text{mutex}$ again, finds $\text{mutex}=0$, and loops: this can repeat forever.

Types of Semaphores: Proofs (/2)
1. **Theorem** With blocked-queue semaphores, starvation is impossible.
• **Proof:**
- If P(1) is blocked on `mutex` there will be at most N-2 processes ahead of P(1) in the queue.
- Therefore after at most N-2 `V(mutex)` operations, P(1) will enter its critical section.
2. **Theorem** With blocked-set semaphores, starvation is possible for N ≥ 3.
• **Proof:**
- For 3 processes it is possible to construct an execution sequence such that there are always 2 processes blocked on the semaphore.
- `V(mutex)` is only required to wake one of them, so it could always ignore one and leave that process starved.

Ye Classicale Problemes of Synchronization
1. Ye Probleme of Ye Producers & Consumers
2. Ye Probleme of Ye Readers & Writers
3. Ye Probleme of Ye Dining Philosophers

The Producer-Consumer Problem
This type of problem has two types of processes:
**Producers** processes that, due to some internal activity, produce data to be sent to consumers.
**Consumers** processes that, on receipt of a data element, consume the data in some internal computation.
- We could join the processes synchronously, so that data is only transmitted when the producer is ready to send it & the consumer is ready to receive it.
- It is more flexible to connect producers and consumers by a buffer (i.e. a queue).
- For an infinite buffer, the following invariants hold for the buffer:
\[ \#\text{elements} \geq 0 \]
\[ \#\text{elements} = 0 + \text{in\_pointer} - \text{out\_pointer} \]
- These are exactly the semaphore invariants for a semaphore called *elements* with initial value 0.

[Hand-drawn figure: "A Graphic Example of the Producer/Consumer Problem" — panels show (1) a producer and a consumer, (2) a buffer of mugs, (3) the problem: the consumer takes from the buffer before the producer is done adding to it, (4) the fix: the consumer must wait for the producer to produce before it can consume, and (5) a bounded buffer: if the buffer is full, the producer must wait for the consumer to start consuming again.]

The Producer-Consumer Problem (cont'd)
```c
/* Copyright (C) Wikipedia */
/* Assumes various procedures, e.g. P, V */
int in_pointer = 0, out_pointer = 0;
semaphore elements = 0;  // items produced
semaphore spaces = N;    // spaces left

void producer(int i) {
  while (1) {
    item = produceItem();
    P(spaces);                         // wait for a free slot
    putItemIntoBuffer(item);
    in_pointer = (in_pointer + 1) % N;
    V(elements);                       // signal a new item
  }
}

void consumer(int i) {
  while (1) {
    P(elements);                       // wait for an item
    item = removeItemFromBuffer();
    out_pointer = (out_pointer + 1) % N;
    V(spaces);                         // signal a free slot
    consumeItem(item);
  }
}

int main() {
  cobegin {
    producer(1); producer(2);
    consumer(1); consumer(2); consumer(3);
  }
}
```
• This demonstrates a real, bounded circular buffer, with the semaphores counting the items and the empty spaces in the buffer.
• As an exercise, prove the following: (i) no deadlock, (ii) no starvation & (iii) no removal from an empty buffer or appending to a full one.

The Dining Philosophers Problem
- An institution hires five philosophers to solve a difficult problem.
- Each philosopher engages in only two activities: thinking & eating.
- Meals are taken in the dining room, which has a table set with five plates & five forks (or five bowls and five chopsticks).
- In the centre of the table is a bowl of spaghetti that is endlessly replenished.
- The philosophers, not being very dextrous, require two forks to eat.
- A philosopher may only pick up the forks immediately to his left and right.

Dining Philosophers (cont'd)
• For this system to operate correctly it is required that:
1. A philosopher eats only if he has two forks.
2. No two philosophers can hold the same fork simultaneously.
3. There can be no deadlock.
4. There can be no individual starvation.
5.
There must be efficient behaviour in the absence of contention.
• This problem is a generalisation of multiple processes accessing a set of shared resources;
– e.g. a network of computers accessing a bank of printers.

Dining Philosophers: First Attempted Solution
• Model each fork as a semaphore.
• Then each philosopher must wait (execute a P operation) on both the left and right forks before eating.

```plaintext
/* pseudo-code for attempt one */
/* fork is an array of semaphores, all initialised to 1 */
semaphore fork[5] := ((5) 1)

process philosopher (i := 0 to 4) {
  while (1) {
    Think ( );
    P(fork[i]);              // grab lh fork
    P(fork[(i+1) mod 5]);    // grab rh fork
    Eat ( );
    V(fork[i]);              // release lh fork
    V(fork[(i+1) mod 5]);    // release rh fork
  }
}
```

Dining Philosophers: Solution #1
• This is called a symmetric solution, since each task is identical.
• Symmetric solutions have advantages, e.g. for load-balancing.
• We can prove that no 2 philosophers hold the same fork, as `Eat()` is the fork's CS.
– If \( \#P_i \) is the number of philosophers holding fork \( i \), then \( \text{fork}[i] + \#P_i = 1 \) (i.e. either a philosopher is holding the fork or the semaphore is 1).
• Since a semaphore is non-negative, \( \#P_i \leq 1 \).
• However, the system can deadlock (i.e. none can eat) when all philosophers pick up their left forks together;
– i.e. all processes execute \( P(\text{fork}[i]) \) before any executes \( P(\text{fork}[(i+1)\bmod 5]) \).
• Two solutions:
– make one philosopher take his right fork first (an asymmetric solution);
– only allow four philosophers into the room at any one time.

Dining Philosophers #2: Symmetric Solution
```plaintext
/* pseudo-code for the room solution to dining philosophers */
/* fork is an array of semaphores, all initialised to 1 */
semaphore Room := 4
semaphore fork[5] := ((5) 1)

process philosopher (i := 0 to 4) {
  while (1) {
    Think ( );               // thinking is not a CS!
    P(Room);                 // at most 4 philosophers in the room
    P(fork[i]);
    P(fork[(i+1) mod 5]);
    Eat ( );                 // eating is the CS
    V(fork[i]);
    V(fork[(i+1) mod 5]);
    V(Room);
  }
}
```
• This solution solves the deadlock problem.
• It is also symmetric (i.e. all processes execute the same code).

Dining Philosophers: Symmetric Solution (cont'd)
Proof of No Starvation
**Theorem** Individual starvation cannot occur.
*Proof:*
- For a process to starve it must be forever blocked on one of three semaphores: $\text{Room}$, $\text{fork}[i]$ or $\text{fork}[(i+1) \bmod 5]$.
a) **Room** semaphore
- If the semaphore is of the blocked-queue type, then process $i$ is blocked forever only if $\text{Room}$ is 0 indefinitely.
- This needs the other 4 philosophers to block on their left forks, as otherwise one will finish eating (if he gets 2 forks), put down his forks & signal $\text{Room}$ ($V(\text{Room})$).
- So this case follows from the $\text{fork}[i]$ case.

Dining Philosophers: Symmetric Solution (cont'd)
Proof of No Starvation
b) **fork[i]** semaphore
– If philosopher \(i\) is blocked on his left fork, then philosopher \(i-1\) must be holding it as his right fork.
– Therefore philosopher \(i-1\) is eating or signalling that he is finished,
– so he will eventually release his right fork (i.e., philosopher \(i\)'s left fork).
c) **fork[(i+1) mod 5]** semaphore
– If philosopher \(i\) is blocked on his right fork, this means philosopher \(i+1\) has taken it as his left fork and never released it.
– Since eating and signalling cannot block, philosopher \(i+1\) must be waiting for his right fork,
– and so, by induction, must all the philosophers \(i+j\), \(0 \leq j \leq 4\).
– But by the \(\text{Room}\) semaphore invariant only 4 can be in the room,
– so philosopher \(i\) cannot be blocked forever on his right fork.

The Readers-Writers Problem
- Two kinds of processes, readers and writers, share a DB.
- Readers execute transactions that examine the DB; writers execute transactions that examine & update the DB.
- Given that the DB is initially consistent, to ensure that it remains so a writer process must have exclusive access.
- Any number of readers may concurrently examine the DB.
- Obviously, for a writer process, updating the DB is a CS that cannot be interleaved with any other process.

The Readers-Writers Problem (cont'd)
```c
int M := 20;  int N := 5;
int nr := 0;                   /* number of active readers */
sem mutexR := 1;  sem rw := 1;

process reader (i := 1 to M) {
  while (1) {
    P (mutexR);
    nr := nr + 1;
    if nr = 1 P (rw); end if   /* first reader locks out writers */
    V (mutexR);
    Read_Database ( );
    P (mutexR);
    nr := nr - 1;
    if nr = 0 V (rw); end if   /* last reader lets writers in */
    V (mutexR);
  }
}

process writer (i := 1 to N) {
  while (1) {
    P (rw);
    Update_Database ( );
    V (rw);
  }
}
```
• This is called the *readers' preference* solution: if some reader is accessing the DB and a reader and a writer both arrive at their entry protocols, the reader always has preference over the writer.

Readers-Writers: Ballhausen's Solution
• The readers' preference solution is not a fair one, as it always gives readers precedence over writers.
• So a continual stream of readers will block any writer process from updating the database.
• Ballhausen's solution aims to tackle this:
– The idea behind the solution is one of capacity: a writer takes up the same space as all the readers reading together.
– A semaphore `access` is used by readers to gain entry to the DB, with an initial value equal to the total number of readers.
– Every time a reader accesses the DB the value of `access` is decremented, and when one leaves it is incremented.
– When a writer wants to enter the DB, it occupies all the space step by step, waiting for the old readers to leave and blocking entry to new ones.
– The writer uses a semaphore `mutex` to prevent deadlock between two writers each trying to occupy half the available space.

Readers-Writers: Ballhausen's Solution (cont'd)
```c
sem mutex = 1;
sem access = m;   /* m = total number of readers */

void reader ( int i ) {
  while (1) {
    P(access);
    // ... reading ...
    V(access);
    // other operations
  }
}
```
```c
void writer ( int i ) {
  while (1) {
    P(mutex);                            /* one writer acquires space at a time */
    for (k = 1; k <= m; k++) P(access);  /* occupy all m reader slots */
    // ... writing ...
    for (k = 1; k <= m; k++) V(access);  /* release them */
    V(mutex);
    // other operations
  }
}

int main ( ) {
  cobegin {
    reader (1); reader (2); reader (3);
    writer (1); writer (2);
  }
}
```

Monitors
• The main issue with semaphores is that they are a low-level programming construct:
– if one coder forgets to do a $V()$ operation on a semaphore after a CS, the whole system can deadlock.
• What is required is a higher-level construct that groups the responsibility for correctness into a few modules.
• Monitors are such a construct. They are an extension of the monolithic monitor found in operating systems for allocating memory, etc.
– They *encapsulate* a set of procedures, and the data they operate on, into single modules (*monitors*).
– They guarantee that only one process can execute a procedure in the monitor at any given time (mutual exclusion).
– Of course different processes can execute procedures from different monitors at the same time.

Monitors (cont'd): Condition Variables
• Synchronisation is achieved by using *condition variables*, data structures that have 3 operations defined on them:
*wait* \((C)\) The process that called the monitor procedure containing this operation is suspended on a FIFO queue associated with \(C\). Mutual exclusion on the monitor is released.
*signal* \((C)\) If the queue associated with \(C\) is non-empty, wakes the process at the head of the queue.
*non-empty* \((C)\) Returns true if the queue associated with \(C\) is non-empty.
• Note the difference between *P* on semaphores and *wait* \((C)\) on monitors: the latter always delays until *signal* \((C)\) is called, the former only if the semaphore variable is zero.

Monitors (cont'd): Signal & Continue
• If a monitor guarantees mutual exclusion:
– a process uses the *signal* operation,
– and thus awakens another process suspended in the monitor,
– so aren't there 2 processes in the same monitor at the same time?
– Yes.
• To solve this, several signalling disciplines can be implemented; the simplest is the *signal & continue* mechanism:
– under these rules the process in the monitor that signals a condition variable is allowed to continue to completion,
– so the *signal* operation should be at the end of the procedure.
– The process suspended on the condition variable, but now awake, is scheduled for *immediate resumption*,
– after the process that signalled the condition variable exits the monitor.

```c
/* Copyright (C) 2006 M. Ben-Ari */
monitor RW {
  int NR = 0, NW = 0;
  condition OK2Rd, OK2Wr;

  void StartRead() {
    if (NW || !empty(OK2Wr)) waitc(OK2Rd);
    NR = NR + 1;
    signalc(OK2Rd);        /* cascade: wake the next waiting reader */
  }
  void EndRead() {
    NR = NR - 1;
    if (NR == 0) signalc(OK2Wr);
  }
  void StartWrite() {
    if (NW || (NR != 0)) waitc(OK2Wr);
    NW = 1;
  }
  void EndWrite() {
    NW = 0;
    if (empty(OK2Rd)) signalc(OK2Wr);
    else signalc(OK2Rd);
  }
}

void Reader(int N) {
  int i;
  for (i = 1; i < 10; i++) {
    StartRead();
    cout << N << " reading\n";
    EndRead();
  }
}

void Writer(int N) {
  int i;
  for (i = 1; i < 10; i++) {
    StartWrite();
    cout << N << " writing\n";
    EndWrite();
  }
}

void main() {
  cobegin { Reader(1); Reader(2); Reader(3); Writer(1); Writer(2); }
}
```
File: rw_control.c

Emulating Semaphores Using Monitors
- Semaphores and monitors are concurrent programming primitives of equal power: monitors are just a higher-level construct.

```c
/* Copyright (C) 2006 M. Ben-Ari. */
monitor monsemaphore {
  int semvalue = 1;
  condition notbusy;

  void monp() {
    if (semvalue == 0) waitc(notbusy);
    else semvalue = semvalue - 1;
  }
  void monv() {
    if (empty(notbusy)) semvalue = semvalue + 1;
    else signalc(notbusy);
  }
}

int n;
void inc(int i) { monp(); n = n + 1; monv(); }

main() {
  cobegin { inc(1); inc(2); }
  cout << n;
}
```

Emulating Monitors Using Semaphores
- We need to implement the *signal and continue* mechanism.
- Do this with:
  - a variable `c_count` counting the processes waiting on the condition,
  - one semaphore, `s`, to ensure mutual exclusion on the monitor,
  - & another, `c_semaphore`, to act as the condition variable.
- **wait** translates as:
```
c_count = c_count + 1;
V(s);                   // release the monitor
P(c_semaphore);         // wait always suspends
c_count = c_count - 1;  // 1 less process waiting
```
- **& signal** as:
```
if ( c_count > 0 )
  V(c_semaphore)        // only signal if there are waiting processes
else
  V(s)                  // otherwise admit another process to the monitor
```

Dining Philosophers Using Monitors
```c
/* Assumes waitc( ), signalc( ) and condition variables */
monitor fork_mon {
  int fork[5] = {2, 2, 2, 2, 2};  /* free forks beside philosopher i */
  condition ok2eat[5];            /* array of condition variables */

  void take_fork(int i) {
    if (fork[i] != 2) waitc(ok2eat[i]);
    fork[(i-1) mod 5] = fork[(i-1) mod 5] - 1;
    fork[(i+1) mod 5] = fork[(i+1) mod 5] - 1;
  }

  void release_fork(int i) {
    fork[(i-1) mod 5] = fork[(i-1) mod 5] + 1;
    fork[(i+1) mod 5] = fork[(i+1) mod 5] + 1;
    if (fork[(i+1) mod 5] == 2) signalc(ok2eat[(i+1) mod 5]);  // rh phil can eat
    if (fork[(i-1) mod 5] == 2) signalc(ok2eat[(i-1) mod 5]);  // lh phil can eat
  }
}

void philo ( int i ) {
  while (1) {
    Think ( );
    take_fork (i);
    Eat ( );
    release_fork (i);
  }
}

void main( ) {
  cobegin { philo(0); philo(1); philo(2); philo(3); philo(4); }
}
```

Dining Philosophers: Proof of No Deadlock
**Theorem** The solution does not deadlock.
**Proof:**
- Let $\#E$ be the number of eating philosophers, i.e. those who have taken both forks.
- Then the following invariants hold from the program:
\[ \text{Non-empty}(\text{ok2eat}[i]) \Rightarrow \text{fork}[i] < 2 \quad \text{eqn (1)} \]
\[ \sum_{i=0}^{4} \text{fork}[i] = 10 - 2(\#E) \quad \text{eqn (2)} \]
- Deadlock implies $\#E = 0$ with all philosophers enqueued on their \(\text{ok2eat}\) conditions and none eating:
- if all are enqueued, then (1) implies $\sum \text{fork}[i] \leq 5$;
- if no philosopher is eating, then (2) implies $\sum \text{fork}[i] = 10$.
- This contradiction implies that the solution does not deadlock.
- But individual starvation can occur. How? How to avoid it?
Monitors: The Sleeping Barber Problem
• A small barber shop has two doors, an entrance and an exit.
• Inside, the barber spends all his life serving customers, one at a time.
1. When there are no customers in the shop, he sleeps in his chair.
2. If a customer arrives and finds the barber asleep,
– he awakens the barber,
– sits in the barber's chair and sleeps while his hair is being cut.
3. If a customer arrives and the barber is busy cutting hair,
– the customer falls asleep in one of the two waiting chairs.
4. When the barber finishes cutting a customer's hair,
– he awakens the customer and holds the exit door open for him.
5. If there are waiting customers,
– he awakens one and waits for the customer to sit in the barber's chair,
– otherwise he sleeps.

Monitors: The Sleeping Barber Problem (cont'd)
- The barber and the customers are interacting processes,
- and the barber shop is the monitor in which they interact.
• Use three counters to synchronise the participants:
– barber, chair and open (all initialised to zero).
• The variables alternate between zero and one:
1. barber==1: the barber is ready to get another customer
2. chair==1: a customer is sitting in the chair but cutting has not yet started
3.
open==1 the exit is open but the customer has not left yet
• The following are the synchronization conditions:
– The customer waits until the barber is available
– The customer remains in the chair until the barber opens the exit door
– The barber waits until a customer occupies the chair
– The barber waits until the customer leaves

```
void get_haircut( )            // called by customer
{
    while ( barber == 0 )
        waitc( barber_available );
    barber := barber - 1;
    chair := chair + 1;
    signalc( chair_occupied );
    while ( open == 0 )
        waitc( door_open );
    open := open - 1;
    signalc( customer_left );
}

void get_next_customer( )      // called by barber
{
    barber := barber + 1;
    signalc( barber_available );
    while ( chair == 0 )
        waitc( chair_occupied );
    chair := chair - 1;
}

void finished_cut( )           // called by barber
{
    open := open + 1;
    signalc( door_open );
    while ( open > 0 )         // wait until the customer has left
        waitc( customer_left );
}
```

Sleeping Barber Using Monitors (cont’d)

```c
void customer( int i )
{
    while (1) {
        // let hair grow
        get_haircut( );
    }
}

void barber( int i )
{
    while (1) {
        get_next_customer( );
        // cut hair
        finished_cut( );
    }
}

int main( )
{
    cobegin {
        barber(1); barber(2);
        customer(1); customer(2);
    }
}
```

Sleeping Barber Using Monitors (cont’d)
• For the barbershop, the monitor provides an environment for the customers and the barber to rendezvous.
• There are four synchronisation conditions:
– Customers must wait for the barber to be available to get a haircut
– Customers have to wait for the barber to open the exit door for them
– The barber needs to wait for a customer to arrive
– The barber needs to wait for the customer to leave
• Processes
– wait on conditions using `waitc()` calls in loops
– `signalc()` at the points where the conditions become true

Summary
• A concurrent program can be defined as the interleaving of sets of sequential atomic instructions.
• Ensuring the correctness of concurrent programs is hard even for two-process systems, as both Safety & Liveness properties must be ensured.
• Semaphores & Monitors facilitate synchronization among processes.
• Monitors are higher level, but either can be emulated by the other.
• Both have been used to solve classical synchronization problems:
– Producers & Consumers
– Readers & Writers
– Dining Philosophers
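As a concrete illustration of the last point, the wait/signal translation given earlier for emulating a monitor condition variable with semaphores can be sketched in Python. This is a minimal, assumed rendering (the one-slot buffer driver and all identifiers are illustrative, not from the slides): `s` is the monitor-entry semaphore and `c_semaphore`/`c_count` implement one condition variable with "pass the baton" signalling.

```python
import threading

s = threading.Semaphore(1)       # monitor entry: P(s) to enter, V(s) to leave
c_semaphore = threading.Semaphore(0)
c_count = 0                      # number of processes waiting on the condition

def wait_c():
    global c_count
    c_count += 1
    s.release()                  # V(s): let another process into the monitor
    c_semaphore.acquire()        # P(c_semaphore): wait always suspends
    c_count -= 1                 # one less process waiting; monitor ownership
                                 # was passed directly by the signaller

def signal_c():
    if c_count > 0:
        c_semaphore.release()    # V(c_semaphore): hand the monitor to a waiter
    else:
        s.release()              # V(s): otherwise admit another process

# Driver: a one-slot buffer where the consumer waits for the producer.
items = []

def consumer():
    s.acquire()                  # enter the monitor
    while not items:
        wait_c()
    items.pop()
    s.release()                  # leave the monitor

def producer():
    s.acquire()                  # enter the monitor
    items.append("haircut")
    signal_c()                   # NB: the signaller passes the monitor on and
                                 # must NOT call s.release() itself afterwards

t = threading.Thread(target=consumer)
t.start()
producer()
t.join()
```

Note the design choice this exposes: `signal_c` either wakes a waiter (transferring monitor ownership to it) or releases the entry semaphore, but never both, which is why the signaller does not release `s` again on its way out.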
Abstract Interpreters: a Monadic Approach to Modular Verification (DRAFT) Sébastien Michelland, Yannick Zakowski, Laure Gonnord To cite this version: Sébastien Michelland, Yannick Zakowski, Laure Gonnord. Abstract Interpreters: a Monadic Approach to Modular Verification (DRAFT). 2024. hal-04385725 HAL Id: hal-04385725 https://inria.hal.science/hal-04385725 Preprint submitted on 11 Jan 2024 HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers. L’archive ouverte pluridisciplinaire HAL, est destinée au dépôt et à la diffusion de documents scientifiques de niveau recherche, publiés ou non, émanant des établissements d’enseignement et de recherche français ou étrangers, des laboratoires publics ou privés. Distributed under a Creative Commons Attribution 4.0 International License Abstract We argue that monadic interpreters built as layers of interpretations stacked atop the free monad constitute a promising way to implement and verify abstract interpreters in dependently-typed theories such as the one underlying the Coq proof assistant. The approach enables modular proofs of soundness of the resulting interpreters. We provide generic abstract control flow combinators proven correct once and for all against their concrete counterpart. We demonstrate how to relate concrete handlers implementing effects to abstract variants of these handlers, essentially capturing the traditional soundness of transfer functions in the context of monadic interpreters. Finally, we provide generic results to lift soundness statements via the interpretation of stateful and failure effects. 
We formalize all the aforementioned combinators and theories in Coq and demonstrate their benefits by implementing and proving correct two illustrative abstract interpreters for a structured imperative language and a toy assembly. 1 Introduction The realm of mechanized verification of programming languages has reached a staggering degree of maturity. Backing up meta-theoretical results with a formalization in a proof assistant has become increasingly routine in the programming language research community [32]. But such formalization efforts have not only become more common, they have grown in scale and ambition: large-scale software is verified against faithful semantics of existing industrial-strength languages [13, 17, 23, 24]. When it comes to formalized proofs, details of representation matter greatly. Propositionally specified transition systems are by and large the most popular: typically, the small-step semantics is specified through proof rules, using a binary relation between dynamic configurations, before considering its transitive closure. While extremely successful, such approaches have drawbacks. On the practical side, these semantics are non-executable at their core, hence requiring some significant extra work to support crucial practice such as differential testing against industrial reference interpreters. In reaction, frameworks such as Skeletal Semantics [2] or the K framework [34] have been designed notably in order to support the automatic derivation of executable interpreters from the formal semantics. On the theoretical side, they tend to lack support for equational reasoning, and often give up on compositionality—recursive definition on the syntax—and modularity—independent definition and combination of the features of the language. These shortcomings become increasingly painful when formal developments scale. 
In contrast, when applicable, monads and subsequently algebraic effects have long been recognized as an appealing approach to modeling the semantics of effectful programs. The monad laws, extended with algebraic domain-specific equations capturing the semantics of the effects at hand, yield powerful reasoning principles. Monads have been both a pen-and-paper theoretical tool and a practical programming paradigm for decades, but have also become increasingly popular in the mechanized realm. In particular, free monads [37] have been at the root of flexible, general-purpose reasoning frameworks. Variations on this idea have appeared throughout the literature, for instance as the program monad in the FreeSpec project [26], as I/O-trees [14], and as McBride’s general monad [27]. In this paper, we focus on interaction trees [39] (ITrees), a recent realization of this approach as a Coq library. ITrees are defined as a coinductive variant of the freer monad [21] and are also closely related to resumption monads [31]. The library provide rich reusable components to model and reason about effectful, recursive, interactive computations, while supporting extraction. In particular, they make the definition of denotational semantics for first order languages with first order effects straightforward. ITrees have been applied in a wide range of projects, such as modeling network servers [22, 41], transactional objects [25], concurrency [4], or non-interference [36]. Their largest application is arguably the Vellvm project [40, 42], providing a compositional, modular and executable semantics for a large sequential subset of LLVM’s intermediate representation. This application leverages the approach’s modularity heavily, structuring the semantics into a series of layers, each plugging in an independent implementation of a feature of the language. 
In the present work, we seek to offer similar benefits of modularity and reusable components for writing verified static analyses against ITree-based formal semantics. We place ourselves more specifically in the abstract interpretation framework [6, 7]. Abstract interpretation is well known for providing rich ways of combining abstractions, through products [5] or communication-based protocols [8, 16]. In this paper, we do not focus our attention on such construction of rich abstract domains. Rather, we follow the big-step abstract interpreter line of works [2, 9, 18, 20] in seeking to provide rich reusable combinators to lighten the construction of verified abstract interpreters. Our contributions can be crystallized as follows: - we identify aflow, an extensible monad for monadically programming abstract interpreters in Coq; - we capture a composable notion of soundness suitable for expressing the correctness of partially interpreted monadic interpreters; - we define and certify a collection of flow combinators; - we demonstrate our library by proving correct two abstract interpreters for a structured imperative language and for a control-flow graph language; - we emphasize that most of the proof effort is internalized in the library: components and their proofs of correctness are reused. All of our results are formalized in Coq and provided as an open source library. Section 2 starts by providing necessary background on ITrees and abstract interpretation. Section 3 illustrates the challenges and motivates our design, whose programmatic component is described in more detail in Section 4. It is exemplified in Section 5, describing our case studies. Finally, Section 6 provides details on the meta-theory provided by our library, and the structure of a proof of soundness of an abstract interpreter from the perspective of a user of our library. We conclude with related work. 2 Background Typographic remarks. 
For clarity and conciseness, we take some light liberties with Coq code included in this paper. When clear from context, we omit implicit arguments. We use mathematical notations in lieu of traditional identifiers. Furthermore, we present simplified versions of the code such as specialized definitions where the artifact is parametrized, or Fixpoint instead of Equations. We hope it will create no confusion, and systematically reference the accompanying with hyperlinks symbolized by 📖. We make use of functions between type families, writing $E \rightsquigarrow F : = \forall (X), \ E X \rightarrow F X$ for such a function between $E, F : \text{Type} \rightarrow \text{Type}$. We write $\mathbb{I}$ and $\emptyset$ for the unit type and its inhabitant. 2.1 Interaction Trees and Monadic Interpreters Interaction Trees [39] (ITrees) have emerged in the Coq ecosystem as a rich toolbox for building compositional and modular monadic interpreters for first order languages. The library also provides an equational theory for reasoning about equivalence and refinement of computations. Through this section, we introduce the programmatic side of this framework. All of our results are formalized in Coq and provided as an open source library. Section 2 starts by providing necessary background on ITrees and abstract interpretation. Section 3 illustrates the challenges and motivates our design, whose programmatic component is described in more detail in Section 4. It is exemplified in Section 5, describing our case studies. Finally, Section 6 provides details on the meta-theory provided by our library, and the structure of a proof of soundness of an abstract interpreter from the perspective of a user of our library. We conclude with related work. Figure 1. 
ITrees: type signature of the main combinators \[ \text{ITree} : \text{Type} \rightarrow \text{Type} \rightarrow \text{Type} \] \[ \begin{align*} \text{ITree} & \text{.ret} (\nu : R) : \text{itree} \ E \ R \\ \text{ITree} & \text{.bind} (\omega : \text{itree} \ E \ T) (k : T \rightarrow \text{itree} \ E \ U) : \text{itree} \ E \ U \end{align*} \] \[ \begin{align*} \text{ITree} & \text{.trigger} : \text{E} \rightarrow \text{itree} \ E \\ \text{ITree} & \text{.iter} (\text{body} : I \rightarrow \text{itree} \ E (I+R)) : I \rightarrow \text{itree} \ E \ R \end{align*} \] The datatype takes two parameters: a signature $E$ that specifies the set of interactions the computation may have with the environment, and the type $R$ of values that it may return. ITree computations can be thought of as trees built out of three constructors. Leaves, via the Ret constructor, model pure computations, carrying $R$ values. Vis nodes model an effect $e$ being performed, before yielding to the continuation $k$ with the value resulting from $e$. Finally, ITrees are defined coinductively, allowing them to model diverging computations as non-well-founded trees. Accordingly, the Tau constructor represents a non-observable internal step that occurs, much as in Capretta’s delay monad [3]. One may think of ITrees as a low level imperative programming language embedded inside of Gallina. The library exposes the primitive combinators shown in Figure 1. ITrees have a monadic structure: pure computations can be embedded via ret, and computations can be sequenced with the traditional bind construct. A minimal effectful computation can be written ITree.trigger e, yielding control to the environment to perform an effect $e$ and returning the result. By virtue of their coinductive nature, ITrees form what is sometimes referred to as a completely iterative monad [1]. From the eye of the programmer, this captures the ability to write fixpoints using the iter combinator. 
Operationally, iter $f$ i is the computation performing $f$ i, checking whether the result is a new accumulator $\text{inr r}$ and continuing with iter $f$ j, or if it is a final value $\text{inl r}$ and returning $r$. \footnote{We use $x \leftarrow c ;$ as a notation for bind c (fun x => k x).} To make things concrete, we turn to our main running example: a traditional Imp language [30] whose abstract syntax is depicted on Figure 2. Arithmetic expressions contain variables in $\mathbb{V}$, literals in $\mathbb{V}$ and binary operations. Statements include the usual assignments, sequencing, conditionals and loops, as well as an assert statement acting as a no-op if the condition is valid, as failure otherwise. We model Imp’s dynamic semantics using ITrees. The process, already illustrated in [39], and at scale notably in Velvm [40], is split into two main phases. **Representation.** First, we represent the abstract syntax into an interaction tree: one can think of it as a coinductive representation of the labeled transition system denoting the program. We hence collect the labels, that is the effects that the program may perform: 3 ``` Variant arithE : Type → Type := | Compute (op : Op) (l r : V) : arithE V. Variant memE : Type → Type := | Read (a : X) : memE V | Write (a : X) (v : V) : memE 1. Variant assertE : Type → Type := | Assert (v : V) : assertE 1. ``` The interface specifies for each event its arguments and its variables in $\mathbb{X}$, literals in $\mathbb{V}$ and binary operations. Statements include the usual assignments, sequencing, conditionals and loops, as well as an assert statement acting as a no-op if the condition is valid, as failure otherwise. ``` Interpretation. By representing Imp’s abstract syntax as ITrees, we have given a semantics to its control flow, but its effects remain purely syntactic. We now provide handlers ``` for each category of effects, implementing them through an appropriate monad transformer, as shown on Figure 4. 
``` The arithmetic operations we consider here are pure, hence $h_{\text{arith}}$ does not introduce any transformer; it only relies on a pure implementation compute_binop omitted here. Memory interactions are stateful, which we implement with the traditional state transformer over a concrete map $\text{mem}$ providing $\text{mem}_{\text{store}}$ and $\text{mem}_{\text{get}}$ operations (the latter returning $\theta$ by default). Finally, asserts may fail, hence $h_{\text{arith}}$ introduces failure via the usual $\text{failT}$ transformer. We are finally ready to define Imp’s denotation, eval, by successively interpreting all three layers of effects. Each interpretation removes an event family from the signature and adds a monad transformer. The resulting semantic domain is hence $\text{failT} (\text{stateT} S (\text{itree } 0))$ (where $\theta$ is the empty signature), i.e. a stateful computation that may diverge or end in a state of error. ``` Getting there requires two final ingredients. First, the $\text{hoist}$ monadic combinator lifts a monad morphism $f : m \sim n$ under a transformer $t : \text{hoist} f : t m \sim t n$. Second, ITrees’s $\text{interp}$ function lifts an implementation of events as a handler to a whole tree: $\text{interp} (h : E \sim M) : \text{itree } E \sim M$. Putting all the ingredients together: ``` ``` ``` 2.2 Abstract Interpretation Abstract interpretation [7] provides a simple and elegant way to compute sound approximations of a program’s semantics, by mimicking the concrete evaluation of the program in an abstract fashion. The analysis defines an over-approximation of the set of states and control flow of the concrete program, trading accuracy in exchange for guaranteed termination. An abstract domain defines approximations of program objects (values); for simplicity in this paper we consider non-relational numerical domains. 
To further exemplify, we shall consider the Interval domain, which abstracts sets of numerical values $V \subseteq \mathbb{Z}$ by $V^\# \subseteq \text{Interval}^4$, where $\text{Interval} = (\mathbb{Z} \cup -\infty) \times (\mathbb{Z} \cup +\infty)$. We use the standard formalization of domains as lattices equipped with union (join, $\sqcup$), minimal and maximal elements ($\bot$, $\top$), and a decidable order denoted by $\subseteq$. A pair of abstraction and concretization functions $(a, g)$ forming a Galois connection is expected to relate the abstract domain to the concrete one, although we follow Pichardie [29’s $\gamma$-only encoding as summed up Figure 5. To ensure termination during the analysis of loops, abstract domains come with a widening operator equipped with a well-founded measure over vectors of naturals. 4The $\text{interp}$ function is parametric in $M$. However, to avoid ambiguity in the remaining, we postfix its name by the effect introduced by the handler. Definition compute_binop (op : Op) (l r : V) : V := fun ' (Compute op l r) ⇒ ret (compute_binop op l r). Definition h_arith ' (Monad M) : arithE → M := fun ' (Compute op l r) ⇒ ret (compute_binop op l r). Definition h_assert ' (Monad M) : assertE → failT M := fun ' (Assert v) ⇒ ret (if v =? 0 then None else (Some tt)). Definition h_mem ' (Monad M) : memE → stateT M := fun e m ⇒ match e with | Read a ⇒ ret (m, mem_get m a) | Write a v ⇒ ret (mem_store m a v, tt) end. Figure 4. Imp: effect handlers Beyond the fact that computations operate over abstract values and stores, we use Imp to informally highlight, in a big-step style, how uncommon the control flow of the resulting interpreter is, a crucial difficulty for the framework we develop in the following sections. 
Conditions must run both branches “in parallel”, from the same initial memory, and join the results.\(^5\) \[ \llbracket \text{if } b \text{ then } c_1 \text{ else } c_2 \rrbracket^m = \llbracket c_1 \rrbracket^m \mathbin{\uplus} \llbracket c_2 \rrbracket^m. \] Loops \(\textbf{while } b \textbf{ do } c\) could naively perform an unbounded number of (abstract) iterations.\(^6\) Termination is hence ensured by the usage of the widening operator, which converges due to its well-founded measure: \[ \llbracket \textbf{while } b \textbf{ do } c \rrbracket^m = \text{repeat } m^\# \leftarrow \llbracket c \rrbracket^m \mathbin{\uplus} \llbracket c \rrbracket^m \\ \text{if } (m^\# \subseteq m^\#) \Rightarrow \text{return } m^\# \] That is, \(\llbracket \textbf{while } b \textbf{ do } c \rrbracket^m\) is the least fixpoint of iterating the loop body with widening, applied on \(m^\#\). From these ingredients (replacing computations with abstract domain operations and control flow with abstract transformers), the abstract interpretation framework \(^7\) guarantees that the computation of the abstract semantics always terminates and is safe, in the sense that the concretization of the obtained semantics is always larger than the (usually untractable) concrete semantics. 3 Design of a layered abstract interpreter We are now ready to consider the contribution of this paper: designing a monadic abstract interpreter built in a modular fashion and resulting in a static analysis proven correct against the concrete semantics defined in Section 2.1. This section focuses on how surface-level requirements influence the design of the abstract interpreter. We build up to Figure 6, providing a bird’s eye view of the interpreter instantiated on a simple Imp program, to prepare for formal details in Sections 4 and 6. 
Perhaps the most striking feature of the approach is that we build a hybrid abstract program (Figure 6, 1), generated from the source program while also embedding key components of an abstract interpreter, such as lattices and a fixpoint approximation scheme. It is similar to an abstract interpreter partially evaluated on a chosen input program. As a result, the abstract program exhibits behaviors from both its source and these generic abstract interpretation components. Interpreting before unfolding control flow. As an immediate consequence of this duality, consider the C-like source expression (condition ? (true-value) : (false-value)), which evaluates to true-value when the condition is true, and false-value otherwise. Assuming that the condition is not statically determined, the abstract program computes an approximation of both options using the lattice’s join operation. However, a particular order must be chosen, e.g., the true branch first. Thus, it is tempting to denote the abstract program as the following ITree: \[ t \leftarrow \llbracket \text{true-value} \rrbracket^m,\llbracket \text{false-value} \rrbracket^m;\llbracket \text{true-value} \rrbracket^m,\llbracket \text{false-value} \rrbracket^m \] But unfolding the conditional choice in this way leads to an issue incompatible with the modular construction we seek: a sequence point between the computations of \(t\) and \(f\) is introduced. Since the semantics of “sequence” change with the current monad, this hypothetical abstract program denoted as an ITree ceases to abstract the behavior of the concrete program once we interpret its effects. For instance, introducing into the state monad w.r.t. a handler \( h \) yields \[ s \mapsto (s', t) \leftarrow \text{interp}_h s \frac{\text{true-value}}{\text{false-value}} s' ; \] which incorrectly uses the final state \( s' \) of the \textit{true} branch as the initial state of the \textit{false} branch, instead of the original state \( s \). 
This results from a confusion between source-level sequence (where we want to introduce new monadic effects by interpretation) and an internal sequenced computation inside an abstract interpretation algorithm (where we don’t). More generally, we need to be able to differentiate control flow components. Since an ITree cannot capture this difference, we use a different structure where source control flow remains symbolic during interpretation, and unfold it to an ITree only once all events are gone. The abstract control flow monad. We first denote the program into an inductive freer monad with symbolic control flow operations dubbed \texttt{aflow} (formally defined in Section 4.1). In this form, abstract programs can be seen as a tree of control flow combinators such as \texttt{abstract cond 2} and \texttt{abstract sequence 3}, with atomic computations as leaves. We maintain this structure throughout monadic interpretation by applying monad transformers and interpreting events without changing the combinator tree. Once all events are interpreted, we unfold control flow combinators, collapsing the abstract program into a pure ITree computation 4. Preservation by interpretation. Our goal of retaining the \texttt{aflow} structure when we switch monads raises the question of control flow combinators being “preserved by interpretation”. We show in Section 4.2 and 4.3 that we can do so syntactically, in that the interpretation of any abstract combinator, e.g. \texttt{abstract cond}, can be expressed as another instance of the same combinator. This relies on combinators being able to internalize the monadic effects added when interpreting, thanks to extra parameterization. Galois connections for events. This work focuses on proving an abstract interpreter sound by showing that the individual interpretations of each layer (i.e., each source language feature) are sound in isolation, before composing these proofs together. 
The soundness at each layer captures that “identical” events should get interpreted into sound subprograms. However, most events have parameters, such as writing to a variable in \texttt{IMP}: \[ \text{Write: } x \rightarrow v \rightarrow \text{memE 1}. \] Hence, the signature for the corresponding abstract event must be different: 5 \[ \text{Write}^\#: x \rightarrow v \rightarrow \text{memE}^\# 1^\#. \] And so we need to relate events through a Galois connection, typically by matching arguments: \[ \text{Write } x \; v \in \text{Write}^\# y \; v^\# \triangleq x = y \land v \in v^\# \] This is done for each individual event when defining the source language. Syntactic soundness. Soundness of an abstract interpreter w.r.t. a concrete semantics expresses that the abstract value computed by the analyzer correctly over-approximates all possible concrete executions. This final statement \( 6 \) is formalized as the \texttt{sound} predicate in Section 6.1. However, this notion cannot be used for partially-interpreted programs because it ignores events (and traces are not comparable due to differences in control flow unfolding). We solve this issue by relying on the syntactic preservation of control flow combinators. We introduce an intermediate soundness predicate, dubbed \texttt{sound'}, which matches the control flow combinators of the concrete and abstract programs syntactically, while relating raw values and events through Galois connections at the leaves. Programs are initially related by \texttt{sound'} because \([\cdot]\) and \([\cdot]^\#\) mirror each other. 6 \footnote{The return type could remain as \( 1 \), but uniformly using a lattice type Galois-connected to the original makes things more consistent.} Figure 6. Overview of the denotation process for a simple Imp program. Denotation from top to bottom. The concrete program is a standard ITrie. The abstract program is an aflow until combinators are unfolded and it gets compiled to an event-less ITree. 
In both columns, ∘ represents the concrete/abstract sequence combinator, with the initial computation on top and continuation(s) on the side and bottom. Concrete combinators are not materialized in the ITree but tracked propositionally by sound'. Then, since combinators are syntactically preserved by interpretation (and handlers are sound), sound' can be maintained through each layer; this is formalized by a collection of interp_sound_*T theorems. Summary. Figure 6 summarizes the steps of the monadic interpretation process guided by these observations, for an example Imp program exhibiting effects from all three categories assertE, memE, and arithE introduced in Section 2.1. Starting at the top, an Imp program is ascribed concrete and abstract semantics by \(\llbracket\cdot\rrbracket\) and \(\llbracket\cdot\rrbracket^\#\), with mirrored structures that use the concrete and abstract form of each value, event, and control flow combinator. See for instance the matching concrete cond and abstract cond combinators, and the pair of identically-placed concrete sequence and abstract sequence (represented as ∘ for readability). At this Initial denotation stage, the programs are an itree and an aflow, with all events still in their symbolic form. Each of the next three layers sees one of the event families get interpreted, switching the monads \(M\) and \(M^\#\) by cutting the event signature and adding a new monad transformer. Handled events are replaced with pure computations, while keeping the flow structure because control flow combinators are syntactically preserved by interpretation. This fact, combined at each layer with a proof of soundness of Imp’s event handlers, implies the preservation of sound'. Finally, the combinators of the abstract program are unfolded into a proper ITree with no events left. This is when abstract interpretation components such as joins and post-fixpoint approximations are added to the program. 
Also at this Combinator unfolding stage, the proof of soundness carried by sound' is finally proven to imply sound, which ends up being a standard analysis of abstract interpretation algorithms, independent of the language features at hand. With this overview in mind, we now dive into the deeper details of implementing this structure in Coq. 4 Implementing the abstract interpreter We now describe the programmatic side of the library in more detail. We first introduce the aflow monad, in which the abstract interpreters are represented, and its effectful interpretations. We showcase the control flow combinators used to program both interpreters. Finally, we brush over the unfolding of these combinators into ITree implementations. 4.1 The aflow monad The aflow monad is defined on Figure 7. We write Type\(^\#\) for a dependent pair of a Type along with a Lattice instance. The monadic structure of aflow E is based on the Ret constructor, with a bind operation recursively propagated through each constructor’s continuation \(k: T \rightarrow \text{aflow } E\ R\). We emphasize that this bind represents a sequence in the abstract interpreter as discussed in Section 3 (which does not carry monadic effects). The Vis constructor provides the freer monad structure and corresponds directly to the Vis constructor of itree. The remaining constructors represent the control flow structures which have dedicated algorithms for abstract interpretation, and are used to build higher-level control flow combinators. Seq sequences two computations (which is not trivial when effects like failure are involved); it is used by the sequence combinator. Join joins the results of two computations; it is used by the cond combinator (which also adds a condition). Fixpoint computes a post-fixpoint of a loop body; it is used by the do and while combinators. Finally, FixpointN computes a post-fixpoint of a family of mutually-tail-recursive functions; it is used by the cfg combinator. 
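A minimal Python analogue of this structure (hypothetical; the actual definition is in Coq, with richer typing) shows the key property of aflow's bind: it threads only through the continuations, leaving the Seq and Join nodes symbolic, so a source-level sequence is never collapsed into an effectful bind.

```python
# A toy aflow: control flow stays a symbolic tree; bind only extends
# continuations, modeling a sequence in the abstract interpreter that
# carries no monadic effects. All intermediate values are ints here.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Ret:
    r: Any

@dataclass
class Vis:
    event: str
    k: Callable            # continuation receiving the event's answer

@dataclass
class Seq:
    f1: Any                # first computation
    f2: Callable           # second computation, fed the first result
    early: Callable        # early u: may the first half have failed?
    post: Callable         # post b u t: join intermediate/final values
    k: Callable            # continuation

@dataclass
class Join:
    left: Any
    right: Any
    k: Callable

def bind(m, f):
    """Recurse into continuations only; never flatten control flow nodes."""
    if isinstance(m, Ret):
        return f(m.r)
    if isinstance(m, Vis):
        return Vis(m.event, lambda v: bind(m.k(v), f))
    if isinstance(m, Seq):
        return Seq(m.f1, m.f2, m.early, m.post, lambda t: bind(m.k(t), f))
    if isinstance(m, Join):
        return Join(m.left, m.right, lambda t: bind(m.k(t), f))
    raise TypeError(m)
```

Binding onto a Ret reduces as usual, but binding onto a Seq returns another Seq: the combinator tree survives, which is what later lets interpretation preserve it.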
These constructors have a few unfamiliar parameters. Focusing on Seq as an illustrative example, we have two extra functions. The parameter early: U → bool indicates whether the initial computation in the sequence may have failed; and post: bool → U → T → T joins the intermediate value \(u: U\) and the final value \(t: T\) if failure may have occurred, as indicated by a boolean parameter. We will shortly show how these functions capture the data-flow paths that arise as a result of interpreting into either the state or failure monads. Overall, aflow will keep track of the source program’s control flow structure through the interpretation process, until it can be unfolded into the appropriate abstract interpretation algorithms once all events have been interpreted. 4.2 Monadic interpretation in aflow Monadic effects in abstract programs are quite different from their counterparts in concrete programs. For instance, failT allows a concrete Imp program to crash. It goes without saying that the corresponding abstract program will not itself crash. Instead, it will simply add crashing states to the set of potential final states with a lattice join. In general, monadic interpretation in the abstract world boils down to two things: 1. using richer lattices to model new effects (e.g., whether the failure path might have been taken), and 2. adding new data-flow paths (e.g., joining potential failure states with the final state). 
```
Fixpoint interp_state# (h: E ⇝ stateT# S (aflow F)) (f: aflow E R)
  : stateT# S (aflow F) R := fun s ⇒
  match f with
  | Ret r ⇒ Ret (s, r)
  | Vis e k ⇒ '(st, t) ← h e s ;; interp_state# h (k t) st
  | Seq f1 f2 early post k ⇒
      Seq (interp_state# h f1 s)
          (fun '(su, u) ⇒ interp_state# h (f2 u) su)
          (fun '(_su, u) ⇒ early u)
          (fun b '(su, u) '(st, t) ⇒ (if b then su ⊔ st else st, post b u t))
          (fun '(st, t) ⇒ interp_state# h (k t) st)
  | Join fleft fright k ⇒
      Join (interp_state# h fleft s)
           (interp_state# h fright s)
           (fun '(st, t) ⇒ interp_state# h (k t) st)
  (* ... *)

Fixpoint interp_fail# (h: E ⇝ failT# (aflow F)) (f: aflow E R)
  : failT# (aflow F) R := fun err ⇒
  match f with
  | Ret r ⇒ Ret (err, r)
  | Vis e k ⇒ '(et, t) ← h e err ;; interp_fail# h (k t) (err ⊔ et)
  | Seq f1 f2 early post k ⇒
      Seq (interp_fail# h f1 err)
          (fun '(eu, u) ⇒ interp_fail# h (f2 u) eu)
          (fun '(eu, u) ⇒ (eu ?= ()) || early u)
          (fun b '(eu, u) '(et, t) ⇒ (eu ⊔ et, post ((eu ?= ()) || b) u t))
          (fun '(et, t) ⇒ interp_fail# h (k t) et)
  | Join fleft fright k ⇒
      Join (interp_fail# h fleft err)
           (interp_fail# h fright err)
           (fun '(et, t) ⇒ interp_fail# h (k t) et)
  (* ... *)
```

Figure 9. The monadic interpreters interp_state# and interp_fail# over aflow (Ret, Vis, Seq and Join cases).

Figure 8. Summary of the monads in our implementation.

We implement support for two effects: stateful and failing computations. The definitions for both concrete and abstract transformers are summarized in Figure 8. The latter are part of the analyzer's interface, so there isn’t one single concrete definition; for instance, failT# could separate the return values of the success and failure cases. The interpreters for stateT# and failT# are shown on Figure 9. 
Unlike in ITree where a single interp function handles all monads, in aflow each monad transformer is unique. This is because they apply their monadic effect in each flow-related constructor by updating the extra functions (here, the early and post parameters in Seq—Join has none). Notice, crucially, how this enables all sub-programs and continuations to be interpreted transparently. In the state monad, an extra global state \(s : S\) is provided as input and returned along the output. Vis supplies it to the event handler, which allows state events to be substituted with pure computations. Seq’s extra functions are updated to indicate that state does not cause failure (early unchanged) but it is affected if a failure occurs elsewhere (post joins it in addition to the return values when \(b = true\)). The failure monad follows a similar structure. Notice how in Vis there is no early exit between \(h e\) (the interpreted event) and the continuation: this is because bind in aflow is a sequence in the abstract interpreter. By contrast, an early exit is added in Seq, which is what makes it more complex than a sequence in a classical analyzer. early is updated to announce potential failure if the error flag \(eu\) is \(\neq ()\), and post propagates this information to other transformers by setting \(b = true\) in its call to the wrapped post. Importantly, both interpretations of Seq and Join are other instances of Seq and Join. This stability under interpretation is later lifted to flow combinators, a key fact for establishing the preservation of sound' by interpretation. 
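The way interpretation enriches early and post can be shown on a tiny Python model (hypothetical names; booleans stand in for the unit# error lattice, with `or` as the join): after failure interpretation, intermediate values become (err, v) pairs, early additionally reports the error flag, and post records possible failure and forwards it to the wrapped post through its boolean parameter.

```python
# Before any interpretation: a pure sequence neither fails nor joins.
def early0(u):
    return False

def post0(b, u, t):
    return t

# After failure interpretation: values carry an error flag (err, v).
def early1(eu_u):
    eu, u = eu_u
    return eu or early0(u)          # announce a failure flagged in the first half

def post1(b, eu_u, et_t):
    (eu, u), (et, t) = eu_u, et_t
    # join the error flags, and tell the wrapped post whether failure may
    # have occurred here or in an outer layer (b)
    return (eu or et, post0(eu or b, u, t))
```

This mirrors the Seq case of the failure interpreter above: the error information flows both outward (through the joined flag) and inward (through the boolean handed to the wrapped post).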
### 4.3 Implementing control flow combinators

Although the ITree library already provides standard combinators to write concrete interpreters such as the one of Section 2.1, maintaining the tight connection between the concrete and abstract interpreters at each layer, as illustrated on Figure 6, requires precise control over the way the concrete semantics is defined. Our library hence provides control flow combinators in concrete/abstract pairs, systematically providing (1) a proof of syntactic preservation by pure, stateful, and failure interpretation for each, and (2) a proof that the unfolded abstract combinator soundly approximates the concrete one.

```
Fixpoint unfold (f: aflow ∅ R): itree ∅ R :=
  match f with
  | Ret r ⇒ itree.ret r
  | Seq f1 f2 early post k ⇒
      u ← unfold f1 ;;
      t ← unfold (f2 u) ;;
      unfold (k (post (early u) u t))
  | Join fleft fright k ⇒
      rl ← unfold fleft ;;
      rr ← unfold fright ;;
      unfold (k (rl ⊔ rr))
  (* ... *)
```

**Figure 11.** Equations for unfolding Ret, Seq and Join.

In practice, the first requirement means that combinators must express the most general form of a condition/loop/etc., accounting for all supported monadic effects. Our library currently supports five combinators: sequence; cond, a binary conditional branching; do, a do/while loop which also supports passing an accumulator value from each iteration to the next; while, a simple wrapper around do which showcases how combinators can build upon each other; and cfg, a control-flow graph structure with a variable number of basic blocks as arguments in the style of assembler, as a more advanced example. Each combinator is adequately parametrized such that it can internalize pure, stateful and failure effects. This informal statement is captured by establishing, for each version of each combinator and for each of the three interpretations of this combinator, an equation expressing the interpreted result in terms of the initial combinator. 
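Continuing the toy Python model (hypothetical; `max` stands in for the lattice join ⊔), the unfolding equations of Figure 11 collapse the symbolic nodes into plain sequencing and joins once no events remain:

```python
# Event-less toy aflow and its unfolding into a plain computation.
from dataclasses import dataclass
from typing import Any, Callable

@dataclass
class Ret:
    r: Any

@dataclass
class Seq:
    f1: Any
    f2: Callable
    early: Callable
    post: Callable
    k: Callable

@dataclass
class Join:
    left: Any
    right: Any
    k: Callable

def unfold(f):
    if isinstance(f, Ret):
        return f.r
    if isinstance(f, Seq):
        u = unfold(f.f1)
        t = unfold(f.f2(u))
        return unfold(f.k(f.post(f.early(u), u, t)))
    if isinstance(f, Join):
        # max plays the role of the lattice join ⊔
        return unfold(f.k(max(unfold(f.left), unfold(f.right))))
    raise TypeError(f)
```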
We illustrate the sequence combinator, and refer the interested reader to our formal development for the others. **Sequence.** The sequence combinators are depicted on Figure 10. On the concrete side, the combinator essentially refines the ITree bind with a device allowing it to absorb state and failure transformers. Rather than immediately distributing the result of the first computation to the continuation, the function dist decides whether to halt midway in case of failure. When interpreting, dist absorbs the monadic effect: for instance, interpreting into failT turns U and T into option types, and dist is updated to map a None return from the first half to SR_Fail, thus taking care of the early exit. The abstract version is straightforward since we took care of designing the Seq aflow constructor with sufficient parametrization to internalize the effects. It therefore wraps around the constructor with an empty continuation. **Other combinators.** The concrete versions of the other combinators present no surprise to readers familiar with other works based on ITrees: cond relies on Coq’s meta-level if, do and while on the iter combinator as illustrated in Section 2.1, and cfg on the way Xia et al. [39] for Asm or Zakowski et al. [40] for LLVM IR resolve calls in a CFG. Similarly to what happens for sequence, elementary versions of cond, do, and cfg are direct wrappers around the corresponding aflow constructors. While it is not the focus of this paper, we illustrate in our library that more precise combinators, for instance variants of cond taking the guard into account, can of course be built. Improving the precision of a combinator in an existing verified analyzer written in our framework only requires locally reestablishing the preservation under the three interpreters, and the soundness after unfolding. Finally, while illustrates a simple example of building a combinator on top of another (do). 
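The role of dist can be sketched in Python (hypothetical names mirroring the paper's SR_Continue/SR_Fail): before interpretation it always continues; after interpreting into an option-like failure monad (None models the failure case here), a None from the first half becomes an early SR_Fail exit.

```python
# Hypothetical model of the concrete sequence's dist parameter.
from dataclasses import dataclass
from typing import Any

@dataclass
class SRContinue:
    t: Any

@dataclass
class SRFail:
    r: Any

def dist_pure(u):
    # no effects yet: hand the value straight to the continuation
    return SRContinue(u)

def dist_fail(u):
    # after failure interpretation: U and T are option-like (value or None);
    # a failed first half short-circuits the whole sequence
    return SRContinue(u) if u is not None else SRFail(None)
```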
### 4.4 Unfolding flow combinators

Once interpretation is finished, flow combinators are unfolded by the unfold function, recursively mapping aflow computations with empty interfaces to itree ones. Figure 11 details Ret, Seq and Join: leaves are trivially translated, while Seq and Join are finally free to be threaded as simple binds. There are no longer any Vis at this stage, since there are no events. The last two constructors unfold into fixpoint approximations—we omit them here. The soundness of this unfolding process, including that of fixpoint approximation schemes, is established when proving the soundness of each combinator, independently of the particular features of the source language.

### 5 Case studies

**Imp.** We finally have all the tools in hand to write our abstract interpreter for the Imp language introduced in Section 2. Following the same canvas as in the concrete case, we first craft our event interface. Naturally, the events are identical to the concrete ones, with one difference: they operate over an abstract domain of values V equipped with a lattice structure parametrizing the analysis.

```
Variant arithE : Type → Type :=
| ComputeE (op : Op) (l r : V) : arithE V.
Variant memE : Type → Type :=
| ReadE (a : X) : memE V
| WriteE (a : X) (v : V) : memE 1#.
Variant assertE : Type → Type :=
| AssertE (v : V) : assertE 1#.
```

Writing again impE# for (arithE + memE + assertE), we write the analyzer in the aflow monad using the library’s abstract combinators, as described in Section 4.3. 
**Figure 12.** Imp: representation of commands into aflow.

Depicted on Figure 12, the code follows precisely the structure of its concrete counterpart, but relying on abstract combinators. We wrap the sequence combinator into a top-level one whose extra parameters are preset to embody the initial absence of failure and global state. The wrapper for the while combinator is more complex, but follows a similar principle. In addition, it further specializes it to the present case where neither the condition nor the body being iterated take arguments, hence carrying unit values around instead. Of course, these higher-level combinators are meant to be directly exposed to the users, but we keep these parameters explicit here to stress that they are enriched during monadic interpretation. It remains to code the three abstract handlers for impE#; they capture standard bits of abstract interpreters. The handler for arithmetic contains the corresponding transfer functions over the abstract domain of values considered. The memory handler boils down to the implementation of the abstract map over abstract values. Finally, handling asserts specifies how failure is treated in the abstract domain. 
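As an illustration of what such handlers compute, here is a hypothetical Python sketch over a finite interval domain (not the library's code): the arithmetic handler applies a transfer function, and the assert handler reports whether the failure path may be taken, with 0 playing the role of "false".

```python
# Hypothetical abstract handlers over a finite interval domain.
from dataclasses import dataclass

@dataclass
class Itv:
    lo: int
    hi: int

def sub_itv(a: Itv, b: Itv) -> Itv:
    # transfer function for sub: smallest interval containing all a - b
    return Itv(a.lo - b.hi, a.hi - b.lo)

def assert_itv(v: Itv):
    # (may_fail, value): the assert may fail iff 0 ∈ v
    return (v.lo <= 0 <= v.hi, v)
```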
As in the concrete case, hoist and the three (abstract) effectful interpreters suffice to define the abstract interpreter, eval#:

```
Definition eval# (s : C) : failT# (stateT# S# (aflow ∅)) 1# :=
  hoist (fun u ⇒ hoist (interp_pure# h_arith#) (interp_state# h_state# u))
        (interp_fail# h_fail# ⟦s⟧# ⊥).
```

We can then extract this interpreter into an OCaml program using Coq’s extraction feature and run it as a standalone program. As a minimal example, consider the following Imp program:

```
x := 2; y := 0;
while (x) do (
  y := 1;
  x := sub(x, 1);
);
z := 5;
assert(y);
z := 6;
```

The analyzer returns a final state indicating x ∈ (−∞, 2], y ∈ {0, 1} and z ∈ {5, 6}. The lower bound on x is the direct result of widening after decrementing in the loop. The simple abstract condition we use does not notice the decidable condition in the first iteration, thus allowing y = 0. This causes the assert to be analyzed as potentially failing, so the final state (which might be at the assert) has either z = 5 or z = 6.

**Asm.** To illustrate the expressivity of our framework, we write an abstract interpreter for Asm, a toy control flow graph language featuring registers and memory. This language presents two layers of interpretation, both stateful. Its abstract aflow representation relies on the cfg# combinator, which computes a fixpoint over a vector of blocks. Both its definition and proof are very similar to those of Imp’s.

6 Layered proof of soundness

We finally put our framework to use to certify the soundness of a monadic static analyzer w.r.t. a monadic concrete semantics built using the combinators our library provides. The predicate sound captures the soundness of an abstract program w.r.t. a concrete one whose interfaces are empty. It describes the traditional intuition that any value that the concrete program could return must be covered through the Galois connection by the abstract value returned by the unfolded abstract program:

```
sound (p : itree ∅ R) (p# : aflow ∅ R#) ≜ ∀ r r#,
p returns r → unfold p# returns r# → r ∈ r#,
```

where “p returns r” expresses that the computation terminates with value r.¹⁰ Note that in the case of computations obtained by the construction of monadic interpreters, the return types R and R# include at this stage global states and failure flags, so every feature of the source language is covered by this single statement. The top-level theorem we establish then simply states that the interpreters are related by sound. For instance for Imp:

```
∀ (c : C) s s#, s ∈ s# → sound (eval c s) (eval# c s#).
```

In order to stress the reusability of our approach, we first describe the key results provided by the library, before highlighting the language- and analyzer-specific proof obligations remaining.

6.1 Generic meta-theory

As discussed in Section 3, most of the proof of soundness is conducted over a stronger notion of soundness, and is only lowered down to sound once all events have been interpreted.

---
¹⁰ The signatures being empty, there is at most one such leaf.

This notion is dubbed syntactic soundness, and is captured by a predicate \[ \text{sound'}\ (p : \text{itree } E\ R)\ (p^\# : \text{aflow } E^\#\ R^\#) : \text{Prop} \] which asserts that \(p\) and \(p^\#\) have identical structure as flow combinator trees, where nodes carry combinators, and leaves carry Galois-connected return values and events. Contrary to sound, it can relate computations with non-empty signatures: it is used to relate the two interpreters starting at the first stage of representation. **Soundness preservation by interpretation.** The library provides a battery of theorems expressing the preservation of sound' by pure, stateful, and failure interpretations. The proof of each such interp_sound_*T lemma relies on the syntactic preservation of the aflow combinators (provided in the library), and assumes a language-specific proof of soundness of the event handler used for the interpretation. 
For instance, the preservation theorem for the state monad is stated as
\[
\begin{array}{l}
\text{Lemma interp\_sound\_stateT} \\
\quad (h : E \rightsquigarrow \text{stateT } S\ (\text{itree } F))\ (h^\# : E^\# \rightsquigarrow \text{stateT}^\#\ S^\#\ (\text{aflow } F^\#)) \\
\quad (t : \text{itree } E\ R)\ (f^\# : \text{aflow } E^\#\ R^\#)\ (s : S)\ (s^\# : S^\#) : \\
\quad s \in s^\# \rightarrow \\
\quad \text{handler\_sound\_stateT } h\ h^\# \rightarrow \\
\quad \text{sound' } t\ f^\# \rightarrow \\
\quad \text{sound' } (\text{interp\_state } h\ t\ s)\ (\text{interp\_state}^\#\ h^\#\ f^\#\ s^\#).
\end{array}
\]
where \(\text{handler\_sound\_stateT } h\ h^\#\) asserts that the interpretations of Galois-connected events by \(h\) and \(h^\#\) are related by sound'. **From syntactic to semantic soundness.** Each pair of combinators provided by the library is proven to be sound: intuitively, given semantically sound inputs, they lead to semantically sound computations. These individual lemmas capture the soundness of the abstract interpretation algorithms that flow combinators unfold into. Given the high degree of parametrization of the combinators, these statements are slightly intricate to state, but strictly follow this intuition. 
For instance, the case of sequence is
\[
\begin{array}{l}
\text{Lemma sound\_seq} \\
\quad (t : \text{itree } \emptyset\ U)\ (\text{dist} : U \rightarrow \text{SeqResult } T\ R)\ (k : T \rightarrow \text{itree } \emptyset\ R) \\
\quad (f^\# : \text{aflow } \emptyset\ T^\#)\ (k^\# : T^\# \rightarrow \text{aflow } \emptyset\ R^\#) \\
\quad (\text{early}^\# : T^\# \rightarrow \text{bool})\ (\text{post}^\# : \text{bool} \rightarrow T^\# \rightarrow R^\# \rightarrow R^\#) \\
\quad (\text{Hsound\_tf} : \text{sound } t\ f^\#) \\
\quad (\text{Hsound\_k} : \forall\, t\ t^\#,\ t \in t^\# \rightarrow \text{sound } (k\ t)\ (k^\#\ t^\#)) \\
\quad (\text{Hsound\_dist} : \forall\, u\ t^\#,\ u \in t^\# \rightarrow \text{match dist } u \text{ with} \\
\qquad \mid \text{SR\_Continue } t \Rightarrow t \in t^\# \\
\qquad \mid \text{SR\_Fail } r \Rightarrow \text{early}^\#\ t^\# = \text{true} \wedge \forall\, r^\#,\ r \in \text{post}^\#\ \text{true}\ t^\#\ r^\#) \\
\quad (\text{Hpost} : \forall\, b\ t^\#\ r^\#,\ r^\# \sqsubseteq \text{post}^\#\ b\ t^\#\ r^\#) : \\
\quad \text{sound } (\text{sequence } t\ \text{dist}\ k)\ (\text{sequence}^\#\ f^\#\ k^\#\ \text{early}^\#\ \text{post}^\#).
\end{array}
\]
Its hypotheses (which we do not detail here) are either about the soundness of its sub-programs, or formalize the requirements on the extra functions (dist, early and post), which are carried by sound' along with the syntactic correspondence of combinators. These individual combinator theorems culminate in the library-provided sound_unfold lemma:
\[
\text{Lemma sound\_unfold} : \forall\ (p : \text{itree } \emptyset\ R)\ (p^\# : \text{aflow } \emptyset\ R^\#),\ \text{sound' } p\ p^\# \rightarrow \text{sound } p\ p^\#,
\]
which allows us to conclude a formal proof that the abstract program safely approximates its concrete original.

### 6.2 User-specific proof obligations: the case of Imp

All control flow combinators needed to evaluate Imp are provided by the library. We plug in a simple interval domain, provided by the library as well. The remaining proof effort is hence minimal. 
Following Figure 6 from top to bottom, the proof is built in three pieces. First, we establish the soundness of the representations, i.e. \(\forall c, \text{sound'}\ \llbracket c\rrbracket\ \llbracket c\rrbracket^\#\). This proof is entirely mechanical, by induction on \(c\), each case reducing to the definition of sound': it simply captures the structural similarity between both interpreters. We then transport this syntactic soundness through the three layers of interpretation. In each, the corresponding interp_sound_*T lemma is provided by the library, though it still expects us to prove that each pair of handlers is sound: for assertE, by soundness of num.isfalse; for memE, based on properties of the map data structures used to associate variables with concrete and abstract values; for arithE, by soundness of the transfer functions over intervals. By chaining these proofs, we obtain the syntactic soundness of the whole abstract interpreter. Finally, since all events have been interpreted away, we derive the semantic soundness of the interpreter by application of sound_unfold.

### 6.3 Extending the library

While the combinators provided are generic and expressive enough to cover a wide range of applications, realistic languages will call for new combinators. We sketch the process of extending the library itself to support more constructions. While it has not been the focus of this work, new non-relational abstract domains can be added by instantiating the Lattice class and a relevant domain class such as NumericalDomain. Adding a new control flow structure requires crafting its monad-generic form as a new pair of combinators, possibly building upon existing ones. This process should usually not require extending aflow—but if needed, the unfolding of the new constructor must additionally be defined. The new combinators must be proved to be preserved by each interpreter, and to be semantically sound with respect to one another. 
Finally, the syntactic soundness must be extended with a new constructor in sound' capturing the new pair, and the corresponding case in sound_unfold must be discharged. Adding support for a new effect is naturally more transversal. A new monad transformer would require extending all control flow structures to ensure they can internalize the new monadic effect; this is the most challenging extension. Once this design question is resolved, each combinator must be proved to be preserved by the new interpreter—which is typically straightforward. 7 Related Work The seminal paper by Cousot and Cousot [7] has spawned an exceptionally rich literature around the abstract interpretation framework. We refer the interested reader to recent introductory books [6, 33], and focus on works directly related to the peculiarities of our approach: mechanization and modularity. Mechanized abstract interpreters. The first attempt at mechanizing abstract interpretation in type theory is probably due to Monniaux [28]. Later on, Pichardie identified during his PhD [29] that the asymmetric γ-only formulation of the framework was the key to alleviating issues with the non-constructivity of the abstraction function encountered in Monniaux’s approach. We inherit from this design. The approach eventually culminated in the Verasco [16] static analyzer: a verified abstract interpreter for the C language combining rich abstract domains to attain an expressiveness sufficient for establishing the absence of undefined behavior in realistic programs. In particular, the analyzer is plugged into CompCert [24] in order to discharge the precondition of its correctness theorem. Verasco supports a notion of modularity essentially orthogonal to the one we propose in the present work: they introduce a system of inter-domain communication based on channels inspired by Astrée [8]. 
Extending our work to support such complex abstract domain combinations and scaling from toy languages to realistic analyzers like Verasco is naturally a major perspective. In contrast, we emphasize that Verasco offers none of the core contributions we propose in our approach: no code reuse, no modularity in terms of effects, and a fuel-based analyzer to avoid having to prove the termination of the analyzer. Skeletal semantics [2] have been leveraged to derive abstract interpreters in a modular fashion that shares commonalities with our approach. Skeletons and their interpretations provide a reusable meta-language in which to code the concrete and abstract semantics of languages, in a similar way to how we exploit ITrees and aflow with handlers. Despite this superficial similarity, the technical implementations are completely different: an in-depth comparison of the two approaches would make for a fruitful avenue. Restricting ourselves to γ-only formulations sacrifices part of the abstract interpretation theory: the so-called "computational" style, deriving an abstract interpreter correct by construction from a concrete one. Darais and Van Horn have introduced Constructive Galois Connections [11, 12] to tackle this issue, and formalized their work in Agda. Big-step abstract interpreters. A wide body of work has sought to modularize and improve code reuse in the design and verification of abstract interpreters. Most of them share conceptually with our work the use of a monadic encoding relying on uninterpreted symbols that gets refined in alternate ways. Bodin et al. [2], previously mentioned, falls into this category, but numerous other non-mechanized contributions have been made in this realm. Most notably, Darais et al. [9] adapt Van Horn and Might’s so-called Abstracting Abstract Machines [35, 38] methodology to build abstract interpreters for higher-order languages using definitional interpreters written in a monadic style, rather than low-level machines. 
Written in a general purpose functional language, their approach relies on a representation of the program with open recursion and uninterpreted operations, further refined into concrete, collecting and abstract semantics. In order to ease the construction of such monadic interpreters, Darais et al. have also identified so-called Galois Transformers [10], well-behaved monad transformers that transport Galois connections and mappings to suitable executable transition systems. Keidel et al. [18, 20] have proposed a framework for modularizing the concrete and abstract semantics based on arrows [15], a generalization of monads. Arrows roughly play the role of Skeletons in [2], and of the combination of concrete signatures and aflow in ours. The connection between these abstractions would deserve a more thorough analysis. Recently, Keidel et al. have considered the modular construction of fixpoint algorithms for big-step abstract interpreters [19]. This endeavor is orthogonal to our contributions and could hopefully be formalized and incorporated.

References

Abstract Interpreters: a Monadic Approach to Modular Verification. Electronic Proceedings in Theoretical Computer Science 129 (2013). https://doi.org/10.4204/EPTCS.129.19
# CNTR: Lightweight OS Containers

Jörg Thalheim, Pramod Bhatotia (University of Edinburgh); Pedro Fonseca (University of Washington); Baris Kasikci (University of Michigan)

Published in: Proceedings of the 2018 USENIX Annual Technical Conference (USENIX ATC '18). Peer-reviewed version.

## Abstract

Container-based virtualization has become the de-facto standard for deploying applications in data centers. However, deployed containers frequently include a wide range of tools (e.g., debuggers) that are not required by applications in the common use-case, but are included for rare occasions such as in-production debugging. As a consequence, containers are significantly larger than necessary for the common case, thus increasing build and deployment time. CNTR provides the performance benefits of lightweight containers and the functionality of large containers by splitting the traditional container image into two parts: the “fat” image, containing the tools, and the “slim” image, containing the main application.
At run-time, CNTR allows the user to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. To achieve this, CNTR transparently combines the two container images using a new nested namespace, without any modification to the application, the container manager, or the operating system. We have implemented CNTR in Rust, using FUSE, and incorporated a range of optimizations. CNTR supports the full Linux filesystem API, and it is compatible with all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). Through extensive evaluation, we show that CNTR incurs reasonable performance overhead while reducing the image size of the Top-50 images available on Docker Hub by 66.6% on average.

## 1 Introduction

Containers offer an appealing, lightweight alternative to VM-based virtualization (e.g., KVM, VMware, Xen), relying instead on process-based virtualization. Linux, for instance, provides the cgroups and namespaces mechanisms that enable strong performance and security isolation between containers [24]. Lightweight virtualization is fundamental to achieving high efficiency in virtualized datacenters and enables important use-cases, namely just-in-time deployment of applications. Moreover, containers significantly reduce operational costs through higher consolidation density and power minimization, especially in multi-tenant environments. Because of all these advantages, it is no surprise that containers have seen widespread adoption by industry, in many cases replacing traditional virtualization solutions altogether [17]. Despite being lightweight, deployed containers often include a wide range of tools such as shells, editors, coreutils, and package managers. These additional tools are usually not required for the application's core function, the common operational use-case, but they are included for management, manual inspection, profiling, and debugging purposes [64].
In practice, this significantly increases container size and, in turn, translates into slower container deployment and inefficient datacenter resource usage (network bandwidth, CPU, RAM, and disk). Furthermore, larger images degrade container deployment time [52, 44]. For instance, previous work reported that downloading container images accounts for 92% of the deployment time [52]. Moreover, a larger code base directly affects the reliability of applications in datacenters [50]. Given the impact of using large containers, users are discouraged from including additional tools that would otherwise simplify the process of debugging, deploying, and managing containers. To mitigate this problem, Docker has recently adopted smaller run-times but, unfortunately, these efforts come at the expense of compatibility problems and have limited benefits [13]. To quantify the practical impact of additional tools on the container image size, we employed Docker Slim [11] on the 50 most popular container images available on the Docker Hub repository [10]. Docker Slim uses a combination of static and dynamic analyses to generate smaller container images, in which only the files needed by the core application are included in the final image. The results of this experiment (see Figure 5) are encouraging: we observed that by excluding unnecessary files from typical containers it is possible to reduce the container size, on average, by 66.6%. Similarly, others have found that only a small subset (6.4%) of the container image data is read in the common case [53]. CNTR addresses this problem by building lightweight containers that still remain fully functional, even under common use-cases (e.g., debugging and profiling). CNTR enables users to deploy the application and its dependencies, while the additional tools required for other use-cases are supported by expanding the container “on-demand”, during runtime (Figure 1 (a)).
More specifically, CNTR splits the traditional container image into two parts: the “fat” image containing the rarely used tools and the “slim” image containing the core application and its dependencies. During runtime, CNTR allows the user of a container to efficiently deploy the “slim” image and then expand it with additional tools, when and if necessary, by dynamically attaching the “fat” image. As an alternative to using a “fat” image, CNTR allows tools from the host to run inside the container. The design of CNTR simultaneously preserves the performance benefits of lightweight containers and provides support for additional functionality required by different application workflows. The key idea behind our approach is to create a new nested namespace inside the application container (i.e., “slim container”), which provides access to the resources in the “fat” container, or the host, through a FUSE filesystem interface. CNTR uses the FUSE system to combine the filesystems of two images without any modification to the application, the container implementation, or the operating system. CNTR selectively redirects the filesystem requests between the mount namespace of the container (i.e., what applications within the container observe and access) and the “fat” container image or the host, based on the filesystem request path. Importantly, CNTR supports the full Linux filesystem API and all container implementations (i.e., Docker, rkt, LXC, systemd-nspawn). 
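The path-based redirection described above can be sketched in a few lines. The routing rule below is our own illustration (using the /var/lib/cntr layout mentioned in the text), not CNTR's actual implementation:

```rust
// Illustrative sketch of CNTR-style path routing in the nested
// namespace: paths under the remounted application tree are served by
// the "slim" (application) image; everything else resolves through
// CNTRFS to the "fat" (tools) container or the host.

#[derive(Debug, PartialEq)]
enum Backend {
    SlimContainer, // application image, remounted at /var/lib/cntr
    FatContainer,  // tools image, served through CNTRFS at /
}

fn route(path: &str) -> Backend {
    if path == "/var/lib/cntr" || path.starts_with("/var/lib/cntr/") {
        Backend::SlimContainer
    } else {
        Backend::FatContainer
    }
}

fn main() {
    assert_eq!(route("/var/lib/cntr/etc/nginx.conf"), Backend::SlimContainer);
    assert_eq!(route("/usr/bin/gdb"), Backend::FatContainer);
    println!("routing ok");
}
```

In the real system this decision is made per filesystem request by the CNTRFS server rather than by a standalone function, but the selection criterion is the same: the accessed path.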
We evaluated CNTR across three key dimensions: (1) functional completeness – CNTR passes 90 out of 94 (95.74%) xfstests filesystem regression tests [14] while supporting applications such as SQLite, Postgres, and Apache; (2) performance – CNTR incurs reasonable overheads for the Phoronix filesystem benchmark suite [18], and the proposed optimizations significantly improve the overall performance; and lastly, (3) effectiveness – CNTR's approach on average results in a 66.6% reduction of image size for the Top-50 images available on Docker Hub [10]. We have made the CNTR implementation publicly available along with the experimental setup [6].

## 2 Background and Motivation

### 2.1 Container-Based Virtualization

Containers are a lightweight, process-level form of virtualization that is widely used and has become a cornerstone technology for datacenters and cloud computing providers. In fact, all major cloud computing providers (e.g., Amazon [2], Google [16] and Microsoft [4]) offer Containers as a Service (CaaS). Container-based virtualization often relies on three key components: (1) the OS mechanism that enforces process-level isolation (e.g., the Linux cgroups [41] and namespaces [40] mechanisms), (2) the application packaging system and runtime (e.g., Docker [9], rkt [38]), and (3) the orchestration manager that deploys, distributes and manages containers across machines (e.g., Docker Swarm [12], Kubernetes [22]). Together, these components enable users to quickly deploy services across machines, with strong performance and security isolation guarantees, and with low overheads. Unlike VM-based virtualization, containers do not include a guest kernel and thus often have a smaller memory footprint than traditional VMs. Containers have important advantages over VMs for both users and data centers:

1. **Faster deployment.** Containers are transferred and deployed faster from the registry [44].
2. **Lower resource usage.**
Containers consume fewer resources and incur less performance overhead [62].
3. **Lower build times.** Containers with fewer binaries and data can be rebuilt faster [64].

Unfortunately, containers in practice are still unnecessarily large because users are forced to decide which auxiliary tools (e.g., debugging, profiling, etc.) should be included in containers at packaging time. In essence, users are currently forced to strike a balance between lightweight containers and functional containers, and end up with containers that are neither as light nor as functional as desirable.

### 2.2 Traditional Approaches to Minimize Containers

The container-size problem has been a significant source of concern to users and developers. Unfortunately, existing solutions are neither practical nor efficient. An approach that has gained traction, and has been adopted by Docker, consists of building containers on smaller base distributions. For instance, most of Docker's containers are now based on the Alpine Linux distribution [13], resulting in smaller containers than traditional distributions. Alpine Linux uses the musl library, instead of glibc, and bundles busybox, instead of coreutils; these differences enable a smaller container runtime but at the expense of compatibility problems caused by behavioral differences. Further, the set of tools included is still restricted and fundamentally does not help users when less common auxiliary tools are required (e.g., custom debugging tools). The second approach to reduce the size of containers relies on union filesystems (e.g., UnionFS [60]). Docker, for instance, enables users to create their containers on top of commonly-used base images. Because such base images are expected to be shared across different containers (and already deployed in the machines), deploying the container only requires sending the diff between the base image and the final image.
However, in practice, users still end up with multiple base images due to the use of different base image distributions across different containers. Another proposed approach relies on unikernels [57, 58], a single-address-space image constructed from a library OS [61, 49, 65]. By removing layers of abstraction (e.g., processes) from the OS, the unikernel approach can be leveraged to build very small virtual machines — this technique has been considered a form of containerization because of its low overhead, even though it relies on VM-based virtualization. However, unikernels require auxiliary tools to be statically linked into the application image; thus, they suffer from the same problem.

### 2.3 Background: Container Internals

The container abstraction is implemented by a userspace container run-time, such as Docker [9], rkt [38] or LXC [37]. The kernel is only required to implement a set of per-process isolation mechanisms, which are inherited by child processes. This mechanism is in turn leveraged by container run-times to implement the actual container abstraction. For instance, applications in different containers are isolated and have all their resources bundled through their own filesystem tree. Crucially, the kernel allows the partitioning of system resources, for a given process, with very low performance overhead, thus enabling efficient process-based virtualization. The Linux operating system achieves isolation through an abstraction called namespaces. Namespaces are modular, are applied to individual processes, and are inherited by child processes. There are seven namespaces that limit the scope of what a process can access (e.g., filesystem mount points, network interfaces, or process IDs) [40]. During container startup, the host's namespaces are unshared by default. Hence, processes inside the container only see files from their filesystem image (see Figure 1 (a)) or additional volumes that have been statically added during setup.
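The namespaces just described are visible to userspace as symlinks under /proc/&lt;pid&gt;/ns, whose targets look like "mnt:[4026531840]"; two processes share a namespace exactly when the inode numbers in these targets match. A small parsing sketch (ours, not CNTR's code):

```rust
// Illustrative sketch: parse a /proc/<pid>/ns symlink target of the
// form "<type>:[<inode>]" into its namespace type and inode number.
// Comparing inode numbers tells whether two processes share a namespace.

fn parse_ns_link(target: &str) -> Option<(String, u64)> {
    let (kind, rest) = target.split_once(":[")?;
    let inode: u64 = rest.strip_suffix(']')?.parse().ok()?;
    Some((kind.to_string(), inode))
}

fn main() {
    let (kind, inode) = parse_ns_link("mnt:[4026531840]").unwrap();
    assert_eq!(kind, "mnt");
    assert_eq!(inode, 4026531840);
    // On Linux, a live value could be obtained with
    //   std::fs::read_link("/proc/self/ns/mnt")
    assert!(parse_ns_link("garbage").is_none());
    println!("ns parsing ok");
}
```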
New mounts on the host are not propagated to the container since, by default, the container runtime mounts all mount points as private.

### 2.4 Use-cases of CNTR

We envision three major use cases for CNTR that cover three different debugging/management scenarios:

**Container to container debugging in production.** CNTR enables the isolation of debugging and administration tools in *debugging containers* and allows application containers to use debugging containers on-demand. Consequently, application containers become leaner, and the isolation of debugging/administration tools from applications gives users a more consistent debugging experience. Rather than relying on disparate tools in different containers, CNTR allows a single debugging container to serve many application containers.

**Host to container debugging.** CNTR allows developers to use the debugging environments (e.g., IDEs) on their host machines to debug containers that do not have these environments installed. These IDEs can take several gigabytes of disk space and might not even be compatible with the distribution the container image is based on. Another benefit of using CNTR in this context is that development environments and settings can also be efficiently shared across different containers.

**Container to host administration and debugging.** Container-oriented Linux distributions such as CoreOS [8] or RancherOS [30] do not provide a package manager, and users need to extend these systems by installing containers even for basic system services. CNTR allows a user of a privileged container to access the root filesystem of the host operating system. Consequently, administrators can keep tools installed in a debug container while keeping the host operating system's filesystem lean.

## 3 Design

In this section, we present the detailed design of CNTR.
### 3.1 System Overview

**Design goals.** CNTR has the following design goals:

- **Generality:** CNTR should support a wide range of workflows for seamless management and problem diagnosis (e.g., debugging, tracing, profiling).
- **Transparency:** CNTR should support these workflows without modifying the application, the container manager, or the operating system. Further, we want to be compatible with all container implementations.
- **Efficiency:** Lastly, CNTR should incur low performance overheads with the split-container approach.

**Basic design.** CNTR is composed of two main components (see Figure 1 (a)): a nested namespace and the CNTRFS filesystem. In particular, CNTR combines slim and fat containers by creating a new *nested namespace* to merge the namespaces of two containers (see Figure 1 (b)). The nested namespace allows CNTR to selectively break the isolation between the two containers by transparently redirecting requests based on the accessed path. CNTR achieves this redirection using the CNTRFS filesystem. CNTRFS is mounted as the root filesystem (/), and the application filesystem is remounted to another path (/var/lib/cntr) in the nested namespace. CNTRFS implements a filesystem in userspace (FUSE), where the CNTRFS server handles the requests for auxiliary tools installed in the fat container (or on the host). At a high level, CNTR connects with the CNTRFS server via the generic FUSE kernel driver. The kernel driver simply acts as a proxy between processes accessing CNTRFS, through Linux VFS, and the CNTRFS server running in userspace. The CNTRFS server can be in a different mount namespace than the nested namespace; CNTR therefore establishes a proxy between the two mount namespaces through a request/response protocol. This allows a process that has all its files stored in the fat container (or the host) to run within the mount namespace of the slim container.

**CNTR workflow.** CNTR is easy to use.
The user simply needs to specify the name of the “slim” container and, in case the tools are in another container, the name of the “fat” container. CNTR exposes a shell to the user that has access to the resources of the application container as well as the resources forwarded from the fat container. Figure 1 (a) explains the workflow of CNTR when a user requests access to a tool from the slim container. CNTR transparently resolves the requested path for the tool in the nested namespace (B). Figure 1 (b) shows an example of CNTR's nested namespace, where the requested tool (e.g., gdb) resides in the fat container. After resolving the path, CNTR redirects the request via FUSE to the fat container (C). Lastly, CNTR serves the requested tool via the FUSE interface (D). Behind the scenes, CNTR executes the following steps:

1. **Resolve container name to process ID and get container context.** CNTR resolves the container name to the underlying process IDs and then queries the kernel for the complete execution context of the container (container namespaces, environment variables, capabilities, ...).
2. **Launch the CNTRFS server.** CNTR launches the CNTRFS server either directly on the host or inside the specified “fat” container containing the tools image, depending on the settings the user specified.
3. **Initialize the tools namespace.** Subsequently, CNTR attaches itself to the application container by setting up a nested mount namespace within the namespace of the application container. CNTR then assigns a forked process to the new namespace. Inside the new namespace, the CNTR process proceeds to mount CNTRFS, providing access to files that are normally out of the scope of the application container.
4. **Initiate an interactive shell.** Based on the configuration files within the debug container or on the host, CNTR executes an interactive shell, within the nested namespace, that the user can interact with.
CNTR forwards its input/output to the user terminal (on the host). From the shell, or through the tools it launches, the user can then access the application filesystem under /var/lib/cntr and the tools filesystem in /. Importantly, tools have the same view on system resources as the application (e.g., /proc/ptrace). Furthermore, to enable the use of graphical applications, CNTR forwards Unix sockets from the host/debug container.

### 3.2 Design Details

This section explains the design details of CNTR.

#### 3.2.1 Step #1: Resolve Container Name and Obtain Container Context

Because the kernel has no concept of a container name or ID, CNTR starts by resolving the container name, as defined by the container manager in use, to the process IDs running inside the container. CNTR leverages wrappers around the container management command-line tools to achieve this translation; currently it supports Docker, LXC, rkt, and systemd-nspawn. After identifying the process IDs of the container, CNTR gathers OS-level information about the container namespace. CNTR reads this information by inspecting the /proc filesystem of the main process within the container. This information enables CNTR to create processes inside the container in a transparent and portable way. In particular, CNTR gathers information about the container namespaces, cgroups (resource usage limits), mandatory access control (e.g., AppArmor [26] and SELinux [19] options), user ID mapping, group ID mapping, capabilities (fine-grained control over super-user permissions), and process environment options. Additionally, CNTR could also read the seccomp options, but this would require non-standard kernel compile-time options and generally has limited value because seccomp options overlap significantly with the capability options. CNTR reads the environment variables because they are heavily used in containers for configuration and service discovery [36].
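One piece of the container context read from /proc is the process environment, which the kernel exposes as NUL-separated "KEY=VALUE" entries in /proc/&lt;pid&gt;/environ. A parsing sketch under that assumption (names are ours, not CNTR's API):

```rust
// Illustrative sketch: parse the byte contents of /proc/<pid>/environ,
// which is a sequence of NUL-terminated "KEY=VALUE" entries, into
// key/value pairs. Malformed entries (no '=') are skipped.

fn parse_environ(raw: &[u8]) -> Vec<(String, String)> {
    raw.split(|&b| b == 0)
        .filter(|entry| !entry.is_empty())
        .filter_map(|entry| {
            let s = String::from_utf8_lossy(entry);
            let (k, v) = s.split_once('=')?;
            Some((k.to_string(), v.to_string()))
        })
        .collect()
}

fn main() {
    let raw = b"PATH=/usr/bin\0HOME=/root\0";
    let env = parse_environ(raw);
    assert_eq!(
        env,
        vec![
            ("PATH".to_string(), "/usr/bin".to_string()),
            ("HOME".to_string(), "/root".to_string()),
        ]
    );
    // A real implementation would obtain the bytes with
    //   std::fs::read(format!("/proc/{}/environ", pid))
    println!("environ parsing ok");
}
```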
In addition to gathering the container context, the CNTR process opens the FUSE control socket (/dev/fuse) before attaching to the container. This file descriptor is required to mount the CNTRFS filesystem after attaching to the container.

#### 3.2.2 Step #2: Launch the CNTRFS Server

The CNTRFS server is executed either directly on the host or inside the “fat” container, depending on the option specified by the user (i.e., the location of the tools). In the host case, the CNTRFS server simply runs as a normal host process. In case the user wants to use tools from the “fat” container, the CNTRFS process forks and attaches itself to the “fat” container. Attaching to the “fat” container is implemented by calling the setns() system call, thereby assigning the child process to the container namespace that was collected in the previous step. After initialization, the CNTRFS server waits for a signal from the nested namespace (Step #3) before it starts reading and serving FUSE requests (reading from a not-yet-mounted FUSE filesystem would otherwise return an error). The FUSE requests are then read from the /dev/fuse file descriptor and redirected to the filesystem of the server namespace (i.e., host or fat container).

#### 3.2.3 Step #3: Initialize the Tools Namespace

CNTR initializes the tools namespace by first attaching to the container specified by the user: the CNTR process forks, and the child process assigns itself to the cgroup, by appropriately setting the /sys/ option, and to the namespace of the container, using the setns() system call. After attaching itself to the container, CNTR creates a new nested namespace and marks all mount points as private so that further mount events (regarding the nested namespace) are not propagated back to the container namespace. Subsequently, CNTR creates a new filesystem hierarchy for the nested namespace, mounting CNTRFS in a temporary mountpoint (/tmp/).
Within the nested namespace, the child process mounts CNTRFS at /tmp/ and signals the parent process (running outside of the container) to start serving requests. Signalling between the parent and child CNTR processes is implemented through a shared Unix socket. Within the nested namespace, the child process remounts all pre-existing mountpoints from the application container by moving them from / to /tmp/var/lib/cntr. Note that the application container is not affected by this, since all mountpoints are marked as private. In addition, CNTR also mounts special container-specific files from the application over files from the tools or host (using bind mount [42]). The special files include the pseudo filesystems procfs (/proc), ensuring the tools can access the container application, and devtmpfs (/dev), containing block and character devices that have been made visible to our container. Furthermore, we bind-mount a set of configuration files from the application container into the temporary directory (e.g., /etc/passwd and /etc/hostname). Once the new filesystem hierarchy has been created in the temporary directory, CNTR atomically executes a chroot, turning the temporary directory (/tmp/) into the new root directory (/). To conclude the container attachment and preserve the container isolation guarantees, CNTR updates the remaining properties of the nested namespace: (1) CNTR drops capabilities by applying the AppArmor/SELinux profile, and (2) CNTR applies all the environment variables that were read from the container process, with the exception of PATH, which is instead inherited from the debug container since it is often required by the tools.

#### 3.2.4 Step #4: Start Interactive Shell

Lastly, CNTR launches an interactive shell within the nested namespace, enabling users to execute the tools. CNTR forwards the shell I/O using a pseudo-TTY and supports graphical applications using Unix socket forwarding.
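Putting Step #3 together, the filesystem layout of the nested namespace can be summarized as a small mount plan. The sketch below is our own data-level illustration using the paths named in the text, not CNTR's code:

```rust
// Data-level sketch of the nested-namespace filesystem hierarchy CNTR
// assembles: CNTRFS becomes the new root, the application image moves
// under /var/lib/cntr, and selected container-specific files are
// bind-mounted from the application over the tools' view.

#[derive(Debug)]
struct Mount {
    source: &'static str,
    target: &'static str,
}

fn nested_namespace_plan() -> Vec<Mount> {
    vec![
        // CNTRFS is the root of the nested namespace.
        Mount { source: "cntrfs", target: "/" },
        // The pre-existing application mounts move below it.
        Mount { source: "/", target: "/var/lib/cntr" },
        // Pseudo filesystems come from the application container, so
        // tools see the application's processes and devices.
        Mount { source: "/proc", target: "/proc" },
        Mount { source: "/dev", target: "/dev" },
        // Selected configuration files also come from the application.
        Mount { source: "/etc/passwd", target: "/etc/passwd" },
        Mount { source: "/etc/hostname", target: "/etc/hostname" },
    ]
}

fn main() {
    let plan = nested_namespace_plan();
    assert!(plan.iter().any(|m| m.source == "cntrfs" && m.target == "/"));
    assert!(plan.iter().any(|m| m.target == "/var/lib/cntr"));
    println!("plan has {} mounts", plan.len());
}
```

In the actual system these mounts are performed with private propagation inside the nested namespace, followed by the chroot described above.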
**Shell I/O.** Interactive shells perform I/O through standard file descriptors (i.e., the stdin, stdout, and stderr file descriptors) that generally refer to terminal devices. For isolation and security reasons, CNTR prevents leaking the terminal file descriptors of the host into a container by leveraging pseudo-TTYs: the pseudo-TTY acts as a proxy between the interactive shell and the user terminal device.

**Unix socket forwarding.** CNTR forwards connections to Unix sockets, e.g., the X11 server socket and the D-Bus daemon running on the host. Unix sockets are also visible as files in our FUSE filesystem. However, since CNTRFS assigns inode numbers that differ from those of the underlying filesystem, the kernel does not associate them with open sockets in the system. Therefore, we implemented a socket proxy that runs an efficient event loop based on epoll. It uses the splice syscall to move data between clients in the application container and servers listening on Unix sockets in the debug container or on the host.

### 3.3 Optimizations

We observed performance slowdowns in CNTRFS when measuring with the Phoronix benchmark suite [18] (§5.2). Therefore, we incorporated the following performance optimizations in CNTR.

**Caching: read and writeback caches.** The largest performance gain came from allowing the FUSE kernel module to cache data returned by read requests and from setting up a writeback buffer for writes. CNTR avoids automatic cache invalidation when a file is opened by setting the FOPEN_KEEP_CACHE flag. Without this flag, the cache cannot be effectively shared across different processes. To allow the FUSE kernel module to batch smaller write requests, we also enable the writeback cache by specifying the FUSE_WRITEBACK_CACHE flag at mount setup time. This optimization sacrifices write consistency for performance by delaying the sync operation.
However, we show that it still performs correctly according to POSIX semantics in our regression experiments (see §5.1).

**Multithreading.** Since I/O operations can block, we optimized the CNTRFS implementation to use multiple threads. In particular, CNTR spawns multiple threads that read from the CNTRFS file descriptor independently, avoiding contention while processing I/O requests.

**Batching.** In addition to caching, we also batch operations to reduce the number of context switches. In particular, we apply the batching optimization in three places: (a) pending inode lookups, (b) forget requests, and (c) concurrent read requests. First, we allow concurrent inode lookups by applying the FUSE_PARALLEL_DIROPS option on mount. Second, the operating system sends forget requests when inodes can be freed by CNTRFS; the kernel can batch a forget intent for multiple inodes into a single request, and CNTR implements this request type as well. Lastly, we set FUSE_ASYNC_READ to allow the kernel to batch multiple concurrent read requests at once, improving the responsiveness of read operations.

**Splicing: read and write.** Previous work suggested the use of splice reads and writes to improve the performance of FUSE [66]. The idea behind the splice operation is to avoid copying data from and to userspace. CNTR uses splice for read operations: the FUSE userspace process moves data from the source file descriptor into a kernel pipe buffer and then to the destination file descriptor with the help of the splice syscall. Since splice does not actually copy the data but instead remaps references in the kernel, it reduces the overhead. We also implemented a splice write optimization. In particular, we use a pipe as temporary storage, since for writes the data is part of the request rather than read from a file descriptor. However, FUSE does not allow reading the request header into userspace without also reading the attached data.
Therefore, CNTR has to move the whole request to a kernel pipe first in order to read the request header separately. After parsing the header, it can move the remaining data to its designated file descriptor using the splice operation. However, this introduces an additional context switch and slows down all FUSE operations, since it is not possible to know in advance whether the next request will be a write request. Therefore, we decided not to enable this optimization by default.

## 4 Implementation

To ensure portability and maintainability, we decided not to rely on container-specific APIs, since they change quite often. Instead, we built our system to be as generic as possible by leveraging more stable operating system interfaces. Our implementation supports all major container types: Docker, LXC, systemd-nspawn, and rkt. CNTR resolves container names to process IDs; this resolution is handled in a container-specific way. On average, we changed only 70 LoC for each container implementation to add such container-specific support to CNTR. At a high level, our system implementation consists of the following four components:

- **Container engine** (1549 LoC) analyzes the container that a user wants to attach to. The container engine also creates a nested mount namespace, where it starts the interactive shell.
- **CNTRFS** (1481 LoC) serves the files from the fat container. We implemented CNTRFS based on Rust-FUSE [33], which we extended to be able to mount across mount namespaces and without a dedicated FUSE mount executable.
- A **pseudo TTY** (221 LoC) connects the shell input/output with the user terminal.
- A **socket proxy** (400 LoC) forwards Unix socket connections between the fat container (or the host) and slim containers, supporting X11 applications.

All core system components of CNTR were implemented in Rust (3651 LoC in total). To simplify deployment, we do not depend on any non-Rust libraries.
In this way, we can compile CNTR as a single, self-contained, ~1.2MB static executable by linking against musl-libc [23]. This design is imperative to ensure that CNTR can run on container-optimized Linux distributions, such as CoreOS [8] or RancherOS [30], that do not have a package manager to install additional libraries. Since CNTR makes heavy use of low-level filesystem system calls, we have also extended the Rust ecosystem with 46 additional system calls to support the complete Linux filesystem API. In particular, we extended the nix Rust library [34], a wrapper around the Linux/POSIX API. The changes are available in our fork [29]. 5 Evaluation In this section, we present the experimental evaluation of CNTR. Our evaluation answers the following questions: 1. Is the implementation complete and correct? (§5.1) 2. What are the performance overheads, and how effective are the proposed optimizations? (§5.2) 3. How effective is the approach at reducing container image sizes? (§5.3) 5.1 Completeness and Correctness We first evaluate the completeness and correctness claims of the CNTR implementation. The primary goal is to evaluate whether CNTR implements the same features as the underlying filesystem (completeness) and whether it follows the same POSIX semantics (correctness). Benchmark: xfstests regression test suite. For this experiment, we used the xfstests [14] filesystem regression test suite. The xfstests suite was originally designed for the XFS filesystem, but it is now widely used for testing all of Linux's major filesystems. It is regularly used for quality assurance before changes to the filesystem code are applied in the Linux kernel. xfstests contains test suites to ensure the correct behavior of all filesystem-related system calls and their edge cases. It also includes crash scenarios and stress tests to verify that the filesystem behaves correctly under load. Further, it contains many tests for bugs reported in the past. Methodology.
We extended xfstests to support mounting CNTRFS. For running tests, we mounted CNTRFS on top of tmpfs, an in-memory filesystem. We ran all tests in the generic group once. Experimental results. xfstests consists of 94 unit tests that can be grouped into the following major categories: auto, quick, aio, prealloc, ioctl, and dangerous. Overall, CNTR passed 90 out of 94 (95.74%) unit tests in xfstests. Four tests failed due to minor implementation details that we currently do not support. Other unit tests were automatically skipped by xfstests because they expected our filesystem to be backed by a block device or relied on features missing in the underlying tmpfs filesystem, e.g., the copy-on-write ioctl. We next explain the reasons for the four failed test cases: 1. Test #375 failed since SETGID bits were not cleared in chmod when the owner is not in the owning group of the access control list. Handling this would require manually parsing and interpreting ACLs in CNTR. In our implementation, we delegate POSIX ACLs to the underlying filesystem by using setfsuid/setfsgid on inode creation. 2. Test #228 failed since we do not enforce per-process file size limits (RLIMIT_FSIZE). Since CNTRFS replays file operations in its own process, the RLIMIT_FSIZE of the calling process is neither set nor enforced in CNTRFS. 3. Test #391 failed since we currently do not support the direct I/O flag in open calls. The support for direct I/O and mmap in FUSE is mutually exclusive; we chose mmap, since we need it to execute processes. In practice, this is not a problem because not all Docker storage drivers support this feature either, including the popular overlayfs and zfs filesystems. 4. Test #426 failed since our inodes are not exportable. In Linux, a process can get inode references from filesystems via the name_to_handle_at system call. However, our inodes are not persisted; they are dynamically requested and destroyed by the operating system.
If the operating system no longer uses them, they become invalid. Many container implementations block this system call anyway, as it has security implications. To summarize, the aforementioned failed test cases are specific to the current state of our implementation, and they should not affect most real-world applications. Moreover, these features are not required by the POSIX standard; they are Linux-specific implementation details. 5.2 Performance Overheads and Optimizations We next report the performance overheads of CNTR's split-containers approach (§5.2.1), detailed experimental results (§5.2.2), and the effectiveness of the proposed optimizations (§5.2.3). Experimental testbed. To evaluate CNTR in a realistic environment for container deployments [3], we ran the performance benchmarks on m4.xlarge virtual machine instances on Amazon EC2. This machine type is assigned two cores of an Intel Xeon E5-2686 CPU (4 hardware threads) and 16GB RAM. The Linux kernel version was 4.14.13. For storage, we used a 100GB EBS volume of type GP2, formatted with the ext4 filesystem and mounted with default options. GP2 is SSD-backed storage attached to the VM via a dedicated network. Benchmark: Phoronix suite. For the performance measurements, we used the disk benchmarks [39] from the Phoronix suite [18]. Phoronix is a meta benchmark that covers a wide range of common filesystem benchmarks, applications, and realistic workloads. We compiled the benchmarks with GCC 6.4 and CNTR with Rust 1.23.0. Methodology. For the performance comparison, we ran the benchmark suite once on the native filesystem (the baseline measurement) and compared the performance when accessing the same filesystem through CNTRFS. The Phoronix benchmark suite runs each benchmark at least three times and automatically adds additional trials if the variance is too high.
To compute the relative overheads with respect to the baseline, we computed the ratio between the native filesystem access and CNTRFS (native/cntr) for benchmarks where higher values are better (e.g., throughput), and the inverse ratio (cntr/native) where lower values are better (e.g., time required to complete the benchmark). 5.2.1 Performance Overheads We first present the summarized results for the entire benchmark suite. Thereafter, we present a detailed analysis of each benchmark individually (§5.2.2). Summary of the results. Figure 2 shows the relative performance overheads for all benchmarks in the Phoronix test suite. We have made the absolute numbers for each benchmark available on the openbenchmark platform [31]. Our experiment shows that 13 out of 20 (65%) benchmarks incur moderate overheads below 1.5× compared to the native case. Three benchmarks showed significantly higher overheads: compilebench-create (7.3×), compilebench-read (13.3×), and postmark (7.1×). Lastly, three benchmarks were faster under CNTRFS than in the native baseline execution: FIO (0.2×), PostgreSQL Bench (0.4×), and the write workload of Threaded I/O (0.3×). To summarize, the results show the strengths and weaknesses of CNTRFS for different applications and under different workloads. At a high level, we found that the performance of inode lookups and the double buffering in the page cache are the main performance bottlenecks in our design (much like they are for FUSE in general). Overall, the performance overhead of CNTR is reasonable. Importantly, note that while reporting performance numbers, we resort to the worst-case scenario for CNTR, where the "slim" application container aggressively uses the "fat" container to run an I/O-intensive benchmark suite.
However, we must emphasize the primary goal of CNTR: to support auxiliary tools in uncommon operational use-cases, such as debugging or manual inspection, which are not dominated by I/O-intensive workloads. 5.2.2 Detailed Experimental Results We next detail the results for each benchmark. AIO-Stress. AIO-Stress submits 2GB of asynchronous write requests. In theory, CNTRFS supports asynchronous requests, but only when the filesystem operates in direct I/O mode. However, direct I/O mode in CNTRFS restricts the mmap system call, which is required to run executables. Therefore, all requests are, in fact, processed synchronously, resulting in a 2.6× slowdown. Apache Web server. The Apache Web server benchmark issues 100K HTTP requests for test files (average size of 3KB), where we noticed a slowdown of up to 1.5×. However, the bottleneck was not serving the actual content, but writing the webserver access log, which triggers small writes (<100 bytes) for each request. These small writes trigger lookups in CNTRFS of the extended attribute security.capability, since the kernel currently neither caches such attributes nor provides an option to cache them. Compilebench. Compilebench simulates different stages in the compilation process of the Linux kernel. There are three variants of the benchmark: (a) the compile stage compiles a kernel module, (b) the read tree stage reads a source tree recursively, and (c) the initial creation stage simulates a tarball unpack. In our experiments, Compilebench has the highest overhead of all benchmarks, with the read tree stage being the slowest (13.4×). This is due to the fact that inode lookups in CNTRFS are slower than in the native filesystem: for every lookup, we need one open() system call to get a file handle to the inode, followed by a stat() system call to check whether we have already looked up this inode under a different path due to hardlinks.
Usually, after the first lookup, this information can be cached in the kernel, but in this benchmark a different source tree with many files is read in every run.

Figure 2: Relative performance overheads of CNTR for the Phoronix suite. The absolute values for each benchmark are available online on the openbenchmark platform [31].

The slowdown of lookups for the other two variants, namely the compile stage (2.3×) and initial create (7.3×), is lower, since it is shadowed by write operations. **Dbench.** Dbench simulates a file server workload, with clients reading files and directories at increasing concurrency. In this benchmark, we noticed that with an increasing number of clients, CNTRFS is able to cache directories and file contents in the kernel. Therefore, CNTRFS does not incur a performance overhead over the native baseline. **FS-Mark.** FS-Mark sequentially creates 1000 1MB files. Since the write requests are reasonably large (16KB per write call), the workload is mostly disk-bound; therefore, there is no difference between CNTRFS and ext4. **FIO benchmark.** The FIO benchmark profiles a filesystem and measures the read/write bandwidth; it issues 80% random reads and 20% random writes for 4GB of data with an average blocksize of 140KB. For this benchmark, CNTRFS outperforms the native filesystem by a factor of 4×, since the writeback cache leads to fewer and larger writes to the disk compared to the underlying filesystem. **Gzip benchmark.** The Gzip benchmark reads a 2GB file containing only zeros and writes the compressed version of it back to the disk. Even though the file is highly compressible, gzip compresses the file more slowly than the data can be accessed through either CNTRFS or ext4. Therefore, there was no significant performance difference between CNTR and the native version. **IOZone benchmark.** IOZone performs sequential writes followed by sequential reads with a blocksize of 4KB.
For write requests, as in the Apache benchmark, CNTR incurs a low overhead (1.2×) due to extended attribute lookups. For sequential reads, both the underlying native filesystem and CNTRFS can mostly serve requests from the page cache. For a smaller workload (4GB), the read throughput is comparable for CNTRFS and ext4 because the data fits in the page cache. However, a larger workload (8GB) no longer fits into the page cache of CNTRFS and degrades the throughput significantly. **Postmark mailserver benchmark.** Postmark simulates a mail server that randomly reads, appends, creates, or deletes small files. In this benchmark, we observed a higher overhead (7.1×) for CNTR. In this case, inode lookups in CNTRFS dominated over the actual I/O because the files were deleted even before they were synced to the disk. **PGBench – PostgreSQL Database Server.** PGBench is based on the PostgreSQL database server. It simulates both reads and writes under normal database load. As with FIO, CNTRFS was also faster in this benchmark, since PGBench flushes the writeback buffer less often. **SQLite benchmark.** The SQLite benchmark measures the time needed to insert 1000 rows into a SQL table. We observed a reasonable overhead (1.9×) for CNTR, since each insertion is followed by a filesystem sync, which means that we cannot make efficient use of our disk cache. **Threaded I/O benchmark.** The Threaded I/O benchmark separately measures the throughput of multiple concurrent readers and writers on a 64MB file. We observed good performance for reads (1.1×) and even better performance for writes (0.3×). This is because reads can mostly be served from the page cache, and for writes our writeback buffer in the kernel holds the data longer than the underlying filesystem does. **Linux Tarball workload.** The Linux tarball workload unpacks the kernel source code tree from a compressed tarball.
This workload is similar to the create stage of the compilebench benchmark. However, since the source is read from a single tarball instead of copying an already unpacked directory, fewer lookups are performed in CNTRFS. Therefore, we incur a relatively low overhead (1.2×), even though many small files are created during unpacking. ### 5.2.3 Effectiveness of Optimizations We next evaluate the effectiveness of the proposed optimizations in CNTR (as described in §3.3). Read cache. The goal of this optimization is to allow the kernel to cache pages across multiple processes. Figure 3 (a) shows the effectiveness of the FOPEN_KEEP_CACHE optimization: we observed 10× higher throughput for concurrent reads with 4 threads in the Threaded I/O benchmark. Writeback cache. The writeback optimization was designed to reduce the number of write requests by maintaining a kernel-based write cache. Figure 3 (b) shows the effectiveness of this optimization: CNTR achieves 65% more write throughput with the writeback cache enabled compared to the native I/O performance for sequential writes in the IOZone benchmark. Multithreading. We made CNTRFS multi-threaded to improve responsiveness when filesystem operations block. While threads improve responsiveness, their presence hurts throughput, as measured in Figure 4 (up to 8% for sequential reads in IOZone). However, we still require multithreading to cope with blocking filesystem operations. Batching. To improve directory and inode lookups, we batched requests to the kernel by specifying the FUSE_PARALLEL_DIROPS flag. We observed a speedup of 2.5× in the compilebench read benchmark with this optimization (Figure 3 (c)). Splice read. Instead of copying memory into userspace, we move the file content within the kernel using the splice() syscall to achieve zero-copy I/O. Unfortunately, we did not notice any significant performance improvement with the splice read optimization.
For instance, the sequential read throughput in IOZone improved only slightly, by just 5%, as shown in Figure 3 (d). 5.3 Effectiveness of CNTR To evaluate the effectiveness of CNTR's approach to reducing image sizes, we used a tool called Docker Slim [11]. Docker Slim applies static and dynamic analyses to build a smaller-sized container image that contains only the files that are actually required by the application. Under the hood, Docker Slim efficiently records all files that have been accessed during a container run using the fanotify kernel module. For our analysis, we extended Docker Slim to support container links, which are extensively used for service discovery; our extension is available as a fork [28]. Dataset: Docker Hub container images. For our analysis, we chose the Top-50 popular official container images hosted on Docker Hub [10]. These images are maintained by Docker and contain commonly used applications such as web servers, databases, and web applications. For each image, Docker provides variants with different Linux distributions as the base image. We used the default variant as specified by the developer. Note that Docker Hub also hosts container images that are not meant to be used directly for deploying applications but rather as base images to build applications (such as language SDKs or Linux distributions). Since CNTR targets concrete containerized applications, we did not include such base images in our evaluation. Methodology. For our analysis, we instrumented the Docker container with Docker Slim and manually ran the application so that it would load all the required files. Thereafter, we built new, smaller containers using Docker Slim. These smaller images are equivalent to containers that developers could have created had they had access to CNTR. We envision that developers will use a combination of CNTR and tools such as Docker Slim to create smaller container images.
Lastly, we validated that the smaller containers still provide the same functionality. Experimental results. On average, we could reduce the image size by 66.6% for the Top-50 Docker images. Figure 5 depicts a histogram of the percentage of container size that could be removed in this process. For over 75% of all containers, the reduction in size was between 60% and 97%. Besides the applications, these containers are packaged with commonly used command-line auxiliary tools, such as coreutils, shells, and package managers. For only 6 out of 50 (12%) containers, the reduction was below 10%. We inspected these 6 images and found that they contain only single executables written in Go and a few configuration files. 6 Related Work In this section, we survey the related work in the space of lightweight virtualization. **Lambda functions.** Since the introduction of AWS Lambda [1], all major cloud computing providers offer serverless computing, including Google Cloud Functions [15], Microsoft Azure Functions [5], and IBM OpenWhisk [20]. Moreover, there exists a research implementation called Open Lambda [55]. Serverless computing offers a small language runtime rather than a full-blown container image. Unfortunately, lambdas offer limited or no support for interactive debugging or profiling [63] because clients have no access to the lambda's container or container-management system. In contrast, the goal of CNTR is to enable lightweight containers, in the same spirit as lambda functions, while also providing an on-demand mechanism for auxiliary tools for debugging, profiling, etc. As future work, we plan to support auxiliary tools for lambda functions [43] using CNTR. **Microkernels.** The microkernel architecture [54, 46, 56] shares many commonalities with the CNTR architecture: applications/services are horizontally partitioned, and communication happens via an inter-process communication (IPC) mechanism.
In CNTR, the application container obtains additional services by communicating with the "fat" container via IPC using CNTRFS. **Containers.** Recently, there has been a lot of interest in reducing the size of containers while still allowing access to a rich set of auxiliary tools. For instance, Toolbox [35] in CoreOS [7] allows bind-mounting the host filesystem into a container in order to administer or debug the host system with tools installed inside the container. In contrast to Toolbox, CNTR allows bidirectional access with the debug container. Likewise, nsenter [27] allows entering existing container namespaces and spawning a process into a new set of namespaces. However, nsenter only covers namespaces; it does not provide the rich filesystem API that CNTR provides. Lastly, Slacker [53] proposed an opportunistic model to pull images from registries to reduce startup times. In particular, Slacker can skip downloading files that are never requested by the filesystem. Interestingly, one could also use Slacker to add auxiliary tools such as gdb to a container in an "on-demand" fashion. However, while Slacker could supply additional auxiliary tools to a container, these tools would only be downloaded to the container host once the container is started by the user. Furthermore, Slacker has a longer build time and greater storage requirements in the registry. In contrast, CNTR offers a generic, lightweight model for additional auxiliary tools. **Virtual machines.** Virtual machines [25, 47, 51] provide stronger isolation than containers by running applications and the OS as a single unit. On the downside, full-fledged VMs are neither scalable nor resource-efficient [62]. To strike a balance between the advantages of containers and virtual machines, Intel Clear Containers (or Kata Containers) [21] and SCONE [45] offer stronger security properties for containers by leveraging Intel VT and Intel SGX, respectively.
Likewise, LightVM [59] uses unikernels and an optimized Xen to offer lightweight VMs. In a similar vein, CNTR allows creating lightweight containers, which are extensively used in data center environments. **Unikernels and Library OSes.** Unikernels [57, 58] leverage library OSes [61, 49, 65, 48] to selectively include only those OS components required to make an application work in a single address space. Unikernels use only a fraction of the resources required by full, multipurpose operating systems. However, unikernels face a challenge similar to that of containers: if they need additional auxiliary tools, the tools must be statically linked into the final image as part of the library OS. Moreover, the unikernel approach is orthogonal to ours, since it targets kernel overhead, whereas CNTR targets tool overhead. 7 Conclusion We presented CNTR, a system for building and deploying lightweight OS containers. CNTR partitions existing containers into two parts: "slim" (application) and "fat" (additional tools). CNTR efficiently enables the application container to dynamically expand with additional tools in an on-demand fashion at runtime. Further, CNTR enables a set of new development workflows with containers: - When testing configuration changes, instead of rebuilding containers from scratch, developers can use their favorite editor to edit files in place and reload the service. - Debugging tools no longer have to be manually installed in the application container; they can be placed in separate debug images for debugging or profiling in production. To the best of our knowledge, CNTR is the first generic and complete system that allows attaching to a container and inheriting all of its sandbox properties. We have used CNTR to debug existing container engines [32]. In our evaluation, we have extensively tested the completeness, performance, and effectiveness properties of CNTR. We plan to further extend our evaluation to include the nested container design.
**Software availability.** We have made CNTR along with the complete experimental setup publicly available [6]. **Acknowledgments.** We thank our shepherd Swaminathan Sundaraman and the anonymous reviewers for their helpful comments. The work is supported in part by the Alan Turing Institute and an Amazon Web Services Education Grant. References [34] Rust library that wraps around the Linux/Posix API. https://github.com/nix-rust/nix.
DESCRIPTION For JSS's full CfP including information on Special Issues, Industry, Trends, and Journal First tracks please continue to read for further details. The Journal of Systems and Software publishes papers covering all aspects of software engineering. All articles should provide evidence to support their claims, e.g. through empirical studies, simulation, formal proofs or other types of validation. Topics of interest include, but are not limited to: - Methods and tools for software requirements, design, architecture, verification and validation, testing, maintenance and evolution - Agile, model-driven, service-oriented, open source and global software development - Approaches for cloud/fog/edge computing and virtualized systems - Human factors and management concerns of software development - Artificial Intelligence, data analytics and big data applied in software engineering - Metrics and evaluation of software development resources - DevOps, continuous integration, build and test automation - Business and economic aspects of software development - Software Engineering education The journal welcomes reports of practical experience for all of these topics, as well as replication studies and studies with negative results. The journal appreciates the submission of systematic literature reviews, mapping studies and meta-analyses. However, these should report interesting and important results, rather than merely providing statistics on publication year, venue etc. In addition to regular papers, JSS features two special tracks (In Practice, New Ideas and Trends Papers), as well as special issues. In Practice is exclusively focused on work that increases knowledge transfer from industry to research. It accepts: (1) Applied Research Reports, where we invite submissions that report results (positive or negative) concerning the experience of applying/evaluating systems and software technologies (methods, techniques and tools) in real industrial settings.
These comprise empirical studies conducted in industry (e.g., action research, case studies) or experience reports that may help understanding situations in which technologies really work and their impact. Submissions should include information on the industrial setting, provide motivation, explain the events leading to the outcomes, including the challenges faced, summarize the outcomes, and conclude with lessons learned, take-away messages, and practical advice based on the described experience. At least one contributing author must be from industry. (2) Practitioner Insights, where we invite experience reports showing what actually happens in practical settings, illustrating the challenges (and pain) that practitioners face, and presenting lessons learned. Problem descriptions with significant details on the context, underlying causes and symptoms, and technical and organizational impact are also welcome. Practitioner insights papers may also comprise invited opinionated views on the evolution of chosen topic areas in practice. Submissions in this category are limited to four pages, and the first author must be from industry. Finally, submissions to this track should be within the scope of the journal's above topics of interest, and they will be evaluated through industry-appropriate criteria for their merit in reporting useful industrial experience rather than in terms of academic novelty of research results. New Ideas and Trends Papers New ideas, especially those related to new research trends, emerge quickly. To accommodate timely dissemination thereof, JSS introduces the New Ideas and Trends Paper (NITP). NITPs should focus on the systems/software engineering aspects of new emerging areas, including: the internet of things, big data, cloud computing, software ecosystems, cyber-physical systems, green/sustainable systems, continuous software engineering, crowdsourcing, and the like.
We distinguish two types of NITPs: (1) a short paper that discusses a single contribution to a specific new trend or a new idea, and (2) a long paper that provides a survey of a specific trend, as well as a (possibly speculative) outline of a solution. NITPs are not required to be fully validated, but preliminary results that endorse the merit of the proposed ideas are welcomed. We anticipate revisiting specific new trends periodically, for instance through reflection or progress reports. New Ideas and Trends Papers warrant speedy publication. Special Issues proposals To submit a proposal for a special issue, please contact the Special Issues Editor Prof. W.K. Chan Journal First Initiative Authors of JSS accepted papers have the opportunity to present their work in those conferences that offer a Journal First track. Using this track, researchers may take the best from two worlds: ensuring high quality in the JSS publication (thorough, multi-phase review process of a long manuscript), while getting feedback from a community of experts and fostering possible collaborations during a scientific event. Details may vary from conference to conference, but generally speaking, JSS papers to be presented in a Journal First track must report completely new research results or present novel contributions that significantly extend previous work. The ultimate decision to include a paper in the conference program is up to the conference chairs, not JSS. A JSS paper may be presented only once through a Journal First track. As of today, the list of conferences with which JSS is collaborating, or has collaborated, through a Journal First track, is: ASE, ICSME, SANER, RE, ESEM, PROFES, and APSEC.
IMPACT FACTOR 2018: 2.559 © Clarivate Analytics Journal Citation Reports 2019

ABSTRACTING AND INDEXING
Science Citation Index Expanded
Current Contents - Engineering, Technology & Applied Sciences
Research Alert
Web of Science
ABI/Inform
Cambridge Scientific Abstracts
Computer Literature Index
Computer Reviews
Engineering Index
Current Contents
Computer Abstracts
CAD/CAM Abstracts
INSPEC
Scopus

EDITORIAL BOARD

Editor-in-Chief
P. Avgeriou, University of Groningen, Groningen, the Netherlands
D. Shepherd, Virginia Commonwealth University Department of Mathematics and Applied Mathematics, Charlottesville, United States

Emeritus Editor-in-Chief
H. van Vliet, Free University of Amsterdam Department of Computer Science, Amsterdam, Netherlands

Special Issues Editor
W.K. Chan, City University of Hong Kong Department of Computer Science, Kowloon, Hong Kong
R. Mirandola, Polytechnic of Milan, Milano, Italy

In Practice Editors
Marcos Kalinowski, Pontifical Catholic University of Rio de Janeiro, Rio de Janeiro, Brazil
Daniel Méndez, Blekinge Institute of Technology Department of Technology and Research, Karlskrona, Sweden

Senior Associate Editors
D-H. Bae, Korea Advanced Institute of Science and Technology School of Computing, Daejeon, Korea, Republic of
E. Barr, University College London, London, United Kingdom
G. Bavota, University of Italian Switzerland, Lugano, Switzerland
A. Bertolino, Alessandro Faedo Institute of Science and Technology National Research Council, Pisa, Italy
A. Chatzigeorgiou, University of Macedonia Department of Applied Informatics, Thessaloniki, Greece
D. Damian, University of Victoria Department of Computer Science, Victoria, British Columbia, Canada
J.C. Dueñas, Polytechnic University of Madrid Telematic Systems Engineering Department, Madrid, Spain
N.A. Ernst, University of Victoria Department of Computer Science, Victoria, British Columbia, Canada
P. Lago, VU Amsterdam, Amsterdam, Netherlands
S. McIntosh, McGill University, Montréal, Quebec, Canada
B. Turhan, Monash University Faculty of Information Technology, Clayton, Australia
W.E. Wong, University of Texas at Dallas Erik Jonsson School of Engineering and Computer Science, Richardson, Texas, United States

Publicity Chair
P. Leitner, University of Gothenburg, Goteborg, Sweden http://philippleitner.net

Journal-First Chair
X. Franch, Polytechnic University of Catalonia Department of Service and Information System Engineering, Barcelona, Spain

Editorial Advisor
D. Spinellis, Athens University of Economics and Business Department of Management Science and Technology, Athens, Greece

Associate Editors
B. Adams, Montreal Polytechnic, Montreal, Quebec, Canada
E. Almeida, Federal University of Bahia Institute of Mathematics, Salvador, Brazil
S. Beecham, Lero - the Irish Software Research Centre, University of Limerick, Limerick, Ireland
K. Blincoe, The University of Auckland Department of Electrical Computer and Software Engineering, Auckland, New Zealand
R. Hoda, The University of Auckland Department of Electrical Computer and Software Engineering, Auckland, New Zealand
H. Koziolek, ABB Corporate Research, Raleigh, North Carolina, United States
L.L. Minku, University of Birmingham School of Computer Science, Birmingham, United Kingdom
M. Nagappan, University of Waterloo David R. Cheriton School of Computer Science, Waterloo, Ontario, Canada
M. Pinzger, Universität Klagenfurt, Software Engineering Research Group, Klagenfurt, Austria
R. Robbes, University of Chile, Santiago de Chile, Chile
A. Serebrenik, University of Technology Eindhoven, Eindhoven, Netherlands
A. Zaidman, Delft University of Technology Faculty of Electrical Engineering Mathematics and Computer Science, Delft, Netherlands

Editorial Board
L. Abeni, Sant'Anna School of Advanced Studies, Pisa, Italy
A. Aleti, Monash University Faculty of Information Technology, Clayton, Australia
J. Bosch, Chalmers University of Technology, Gothenburg, Sweden
A. Bosu, Wayne State University, Detroit, Michigan, United States
M. Bruntink, Software Improvement Group, Amsterdam, Netherlands
Y. Cai, Chinese Academy of Sciences, Beijing, China
Y.F. Cai, Drexel University, Philadelphia, Pennsylvania, United States
M. Cataldo, Robert Bosch LLC, Pittsburgh, Pennsylvania, United States
S. Cha, Korea University Department of Computer Science and Engineering, Seongbuk-gu, Korea, Republic of
V. Chandrasekhar, Facebook Inc, Menlo Park, California, United States
Y. Chen, Arizona State University, Tempe, Arizona, United States
F. Chicano, University of Málaga, Malaga, Spain
I. Crnkovic, Chalmers University of Technology Department of Computer Science and Engineering, Göteborg, Sweden
F. Cuadrado, Queen Mary University of London, London, United Kingdom
K. Dameski, Virginia Commonwealth University, Richmond, Virginia, United States
M. Daneva, University of Twente, Enschede, Netherlands
S. Dash, University of Surrey, Guildford, United Kingdom
C. Ebert, Vector Software GmbH, Stuttgart, Germany
A.R. Fasolino, University of Naples Federico II Department of Electrical Engineering and Information Technology, Napoli, Italy
A. Garcia, Pontifical Catholic University of Rio de Janeiro Department of Information Technology, Rio de Janeiro, Brazil
I. Gorton, Khoury College of Computer Sciences, Seattle, Washington, United States
L. Guan, Loughborough University, Loughborough, United Kingdom
M. Harman, Facebook London, London, United Kingdom
J.-M. Jézéquel, Institute for Research in Computer Science and Random Systems, Rennes, France
H.D. Karatza, Aristotle University of Thessaloniki School of Informatics, Thessaloniki, Greece
K. Kritikos, Foundation of Research and Technology Hellas, Irakleio, Greece
P.B. Kruchten, The University of British Columbia, Vancouver, British Columbia, Canada
M. Linares-Vásquez, University of the Andes, Bogota, Colombia
X. Liu, Deakin University School of Information Technology - Burwood Campus, Burwood, Australia
J. Lu, Nanjing University, Nanjing, China
H. Mei, Peking University, Beijing, China
T. Menzies, North Carolina State University, Raleigh, North Carolina, United States
M. Mirakhorli, Rochester Institute of Technology, Rochester, New York, United States
T.V. Nguyen, University of Nebraska-Lincoln, Lincoln, Nebraska, United States
F. Palomba, University of Salerno Department of Informatics, Fisciano, Italy
C. Roy, University of Saskatchewan Department of Computer Science, Saskatoon, Canada
H. Sharp, The Open University, Milton Keynes, United Kingdom
I. Stamelos, Aristotle University of Thessaloniki, Thessaloniki, Greece
K.J. Stol, University College Cork and Lero, Cork, Ireland
Ö. Ulusoy, Bilkent University, Ankara, Turkey
J. Wang, National Laboratory for Parallel and Distributed Processing, China
D. Weyns, KU Leuven, Belgium and Linnaeus University, Sweden
U. Zdun, University of Vienna, Wien, Austria
H. Zhang, The University of Newcastle School of Electrical Engineering and Computing, Australia
T. Zimmermann, Microsoft Research, Redmond, Washington, United States
A. Zisman, The Open University Computing and Communications Department, Milton Keynes, United Kingdom

GUIDE FOR AUTHORS

Your Paper Your Way
We now differentiate between the requirements for new and revised submissions. You may choose to submit your manuscript as a single Word or PDF file to be used in the refereeing process. Only when your paper is at the revision stage will you be requested to put your paper into a 'correct format' for acceptance and provide the items required for the publication of your article. To find out more, please visit the Preparation section below.

Submission checklist
You can use this list to carry out a final check of your submission before you send it to the journal for review. Please check the relevant section in this Guide for Authors for more details.
Ensure that the following items are present:

One author has been designated as the corresponding author with contact details:
• E-mail address
• Full postal address

All necessary files have been uploaded:
Manuscript:
• Include keywords
• All figures (include relevant captions)
• All tables (including titles, description, footnotes)
• Ensure all figure and table citations in the text match the files provided
• Indicate clearly if color should be used for any figures in print
Graphical Abstracts / Highlights files (where applicable)
Supplemental files (where applicable)

Further considerations
• Manuscript has been 'spell checked' and 'grammar checked'
• All references mentioned in the Reference List are cited in the text, and vice versa
• Permission has been obtained for use of copyrighted material from other sources (including the Internet)
• A competing interests statement is provided, even if the authors have no competing interests to declare
• Journal policies detailed in this guide have been reviewed
• Referee suggestions and contact details provided, based on journal requirements

For further information, visit our Support Center.

BEFORE YOU BEGIN

Ethics in publishing
Please see our information pages on Ethics in publishing and Ethical guidelines for journal publication.

Declaration of interest
All authors must disclose any financial and personal relationships with other people or organizations that could inappropriately influence (bias) their work. Examples of potential conflicts of interest include employment, consultancies, stock ownership, honoraria, paid expert testimony, patent applications/registrations, and grants or other funding. Authors should complete the declaration of interest statement using this template and upload to the submission system at the Attach/Upload Files step. If there are no interests to declare, please choose: 'Declarations of interest: none' in the template. This statement will be published within the article if accepted.
More information.

Submission declaration and verification
Submission of an article implies that the work described has not been published previously (except in the form of an abstract, a published lecture or academic thesis, see 'Multiple, redundant or concurrent publication' for more information), that it is not under consideration for publication elsewhere, that its publication is approved by all authors and tacitly or explicitly by the responsible authorities where the work was carried out, and that, if accepted, it will not be published elsewhere in the same form, in English or in any other language, including electronically, without the written consent of the copyright-holder. To verify originality, your article may be checked by the originality detection service Crossref Similarity Check.

Preprints
Please note that preprints can be shared anywhere at any time, in line with Elsevier's sharing policy. Sharing your preprints e.g. on a preprint server will not count as prior publication (see 'Multiple, redundant or concurrent publication' for more information).

Use of inclusive language
Inclusive language acknowledges diversity, conveys respect to all people, is sensitive to differences, and promotes equal opportunities. Articles should make no assumptions about the beliefs or commitments of any reader, should contain nothing which might imply that one individual is superior to another on the grounds of race, sex, culture or any other characteristic, and should use inclusive language throughout. Authors should ensure that writing is free from bias, for instance by using 'he or she' or 'his/her' instead of 'he' or 'his', and by making use of job titles that are free of stereotyping (e.g. 'chairperson' instead of 'chairman' and 'flight attendant' instead of 'stewardess').
Author contributions
For transparency, we encourage authors to submit an author statement file outlining their individual contributions to the paper using the relevant CRediT roles: Conceptualization; Data curation; Formal analysis; Funding acquisition; Investigation; Methodology; Project administration; Resources; Software; Supervision; Validation; Visualization; Roles/Writing - original draft; Writing - review & editing. Authorship statements should be formatted with the names of authors first and CRediT role(s) following. More details and an example are available online.

Changes to authorship
Authors are expected to consider carefully the list and order of authors before submitting their manuscript and provide the definitive list of authors at the time of the original submission. Any addition, deletion or rearrangement of author names in the authorship list should be made only before the manuscript has been accepted and only if approved by the journal Editor. To request such a change, the Editor must receive the following from the corresponding author: (a) the reason for the change in author list and (b) written confirmation (e-mail, letter) from all authors that they agree with the addition, removal or rearrangement. In the case of addition or removal of authors, this includes confirmation from the author being added or removed. Only in exceptional circumstances will the Editor consider the addition, deletion or rearrangement of authors after the manuscript has been accepted. While the Editor considers the request, publication of the manuscript will be suspended. If the manuscript has already been published in an online issue, any requests approved by the Editor will result in a corrigendum.

Article transfer service
This journal is part of our Article Transfer Service. This means that if the Editor feels your article is more suitable for one of our other participating journals, then you may be asked to consider transferring the article to one of those.
If you agree, your article will be transferred automatically on your behalf with no need to reformat. Please note that your article will be reviewed again by the new journal. More information.

Copyright
Upon acceptance of an article, authors will be asked to complete a 'Journal Publishing Agreement' (see more information on this). An e-mail will be sent to the corresponding author confirming receipt of the manuscript together with a 'Journal Publishing Agreement' form or a link to the online version of this agreement. Subscribers may reproduce tables of contents or prepare lists of articles including abstracts for internal circulation within their institutions. Permission of the Publisher is required for resale or distribution outside the institution and for all other derivative works, including compilations and translations. If excerpts from other copyrighted works are included, the author(s) must obtain written permission from the copyright owners and credit the source(s) in the article. Elsevier has preprinted forms for use by authors in these cases. For gold open access articles: Upon acceptance of an article, authors will be asked to complete an 'Exclusive License Agreement' (more information). Permitted third party reuse of gold open access articles is determined by the author's choice of user license.

Author rights
As an author you (or your employer or institution) have certain rights to reuse your work. More information.

Elsevier supports responsible sharing
Find out how you can share your research published in Elsevier journals.

Role of the funding source
You are requested to identify who provided financial support for the conduct of the research and/or preparation of the article and to briefly describe the role of the sponsor(s), if any, in study design; in the collection, analysis and interpretation of data; in the writing of the report; and in the decision to submit the article for publication.
If the funding source(s) had no such involvement then this should be stated.

Open access
Please visit our Open Access page for more information.

Elsevier Researcher Academy
Researcher Academy is a free e-learning platform designed to support early and mid-career researchers throughout their research journey. The "Learn" environment at Researcher Academy offers several interactive modules, webinars, downloadable guides and resources to guide you through the process of writing for research and going through peer review. Feel free to use these free resources to improve your submission and navigate the publication process with ease.

Language (usage and editing services)
Please write your text in good English (American or British usage is accepted, but not a mixture of these). Authors who feel their English language manuscript may require editing to eliminate possible grammatical or spelling errors and to conform to correct scientific English may wish to use the English Language Editing service available from Elsevier's Author Services.

Submission
Submission to this journal proceeds totally online. Use the following guidelines to prepare your article. Via the homepage of this journal (http://ees.elsevier.com/jss) you will be guided stepwise through the creation and uploading of the various files. The system automatically converts source files to a single Adobe Acrobat PDF version of the article, which is used in the peer-review process. Please note that even though manuscript source files are converted to PDF at submission for the review process, these source files are needed for further processing after acceptance. All correspondence, including notification of the Editor's decision and requests for revision, takes place by e-mail and via the author's homepage, removing the need for a hard-copy paper trail.

Preparation

NEW SUBMISSIONS
Submission to this journal proceeds totally online and you will be guided stepwise through the creation and uploading of your files.
The system automatically converts your files to a single PDF file, which is used in the peer-review process. As part of the Your Paper Your Way service, you may choose to submit your manuscript as a single file to be used in the refereeing process. This can be a PDF file or a Word document, in any format or layout that can be used by referees to evaluate your manuscript. It should contain high enough quality figures for refereeing. If you prefer to do so, you may still provide all or some of the source files at the initial submission. Please note that individual figure files larger than 10 MB must be uploaded separately. In addition to regular papers, JSS features two special tracks (In Practice, New Ideas and Trends Papers), as well as special issues.

**References**
There are no strict requirements on reference formatting at submission. References can be in any style or format as long as the style is consistent.
Where applicable, author(s) name(s), journal title/book title, chapter title/article title, year of publication, volume number/book chapter and the article number or pagination must be present. Use of DOI is highly encouraged. The reference style used by the journal will be applied to the accepted article by Elsevier at the proof stage. Note that missing data will be highlighted at proof stage for the author to correct.

**Length**
Authors are encouraged to submit full-length papers of fewer than 36 single-column or 18 double-column pages. If your manuscript is longer, please include an explanation in your submission as to why the length is justified. For review articles, while there is no page limit, authors should ensure that the length of the paper is justified by its content.

**Formatting requirements**
There are no strict formatting requirements, but all manuscripts must contain the essential elements needed to convey your manuscript, for example Abstract, Keywords, Introduction, Materials and Methods, Results, Conclusions, Artwork and Tables with Captions. If your article includes any Videos and/or other Supplementary material, this should be included in your initial submission for peer review purposes. Divide the article into clearly defined sections.

**Figures and tables embedded in text**
Please ensure the figures and the tables included in the single file are placed next to the relevant text in the manuscript, rather than at the bottom or the top of the file. The corresponding caption should be placed directly below the figure or table.

**Peer review**
This journal operates a single blind review process. All contributions will be initially assessed by the editor for suitability for the journal. Papers deemed suitable are then typically sent to a minimum of two independent expert reviewers to assess the scientific quality of the paper. The Editor is responsible for the final decision regarding acceptance or rejection of articles.
The Editor's decision is final. More information on types of peer review.

**REVISED SUBMISSIONS**

Use of word processing software
Regardless of the file format of the original submission, at revision you must provide us with an editable file of the entire article. Keep the layout of the text as simple as possible. Most formatting codes will be removed and replaced on processing the article. The electronic text should be prepared in a way very similar to that of conventional manuscripts (see also the Guide to Publishing with Elsevier). See also the section on Electronic artwork. To avoid unnecessary errors you are strongly advised to use the 'spell-check' and 'grammar-check' functions of your word processor.

LaTeX
You are recommended to use the Elsevier article class elsarticle.cls to prepare your manuscript and BibTeX to generate your bibliography. Our LaTeX site has detailed submission instructions, templates and other information.

Article structure

Subdivision - numbered sections
Divide your article into clearly defined and numbered sections. Subsections should be numbered 1.1 (then 1.1.1, 1.1.2, ...), 1.2, etc. (the abstract is not included in section numbering). Use this numbering also for internal cross-referencing: do not just refer to 'the text'. Any subsection may be given a brief heading. Each heading should appear on its own separate line.

Introduction
State the objectives of the work and provide an adequate background, avoiding a detailed literature survey or a summary of the results.

Results
Results should be clear and concise.

Discussion
This should explore the significance of the results of the work, not repeat them. A combined Results and Discussion section is often appropriate. Avoid extensive citations and discussion of published literature.

Appendices
If there is more than one appendix, they should be identified as A, B, etc. Formulae and equations in appendices should be given separate numbering: Eq. (A.1), Eq. (A.2), etc.; in a subsequent appendix, Eq. (B.1) and so on. Similarly for tables and figures: Table A.1; Fig. A.1, etc.

Vitae
Submit a short (maximum 100 words) biography of each author. Please provide this in an editable format (e.g. Word), not in PDF format.

Essential title page information
• Title. Concise and informative. Titles are often used in information-retrieval systems. Avoid abbreviations and formulae where possible.
• Author names and affiliations. Please clearly indicate the given name(s) and family name(s) of each author and check that all names are accurately spelled. You can add your name between parentheses in your own script behind the English transliteration. Present the authors' affiliation addresses (where the actual work was done) below the names. Indicate all affiliations with a lowercase superscript letter immediately after the author's name and in front of the appropriate address. Provide the full postal address of each affiliation, including the country name and, if available, the e-mail address of each author.
• Corresponding author. Clearly indicate who will handle correspondence at all stages of refereeing and publication, also post-publication. This responsibility includes answering any future queries about Methodology and Materials. Ensure that the e-mail address is given and that contact details are kept up to date by the corresponding author.
• Present/permanent address. If an author has moved since the work described in the article was done, or was visiting at the time, a 'Present address' (or 'Permanent address') may be indicated as a footnote to that author's name. The address at which the author actually did the work must be retained as the main, affiliation address. Superscript Arabic numerals are used for such footnotes.

Highlights
Highlights are mandatory for this journal as they help increase the discoverability of your article via search engines.
They consist of a short collection of bullet points that capture the novel results of your research as well as new methods that were used during the study (if any). Please have a look at the examples here: example Highlights. Highlights should be submitted in a separate editable file in the online submission system. Please use 'Highlights' in the file name and include 3 to 5 bullet points (maximum 85 characters, including spaces, per bullet point).

Abstract
A concise and factual abstract is required. The abstract should state briefly the purpose of the research, the principal results and major conclusions. An abstract is often presented separately from the article, so it must be able to stand alone. For this reason, References should be avoided, but if essential, then cite the author(s) and year(s). Also, non-standard or uncommon abbreviations should be avoided, but if essential they must be defined at their first mention in the abstract itself.

Graphical abstract
Although a graphical abstract is optional, its use is encouraged as it draws more attention to the online article. The graphical abstract should summarize the contents of the article in a concise, pictorial form designed to capture the attention of a wide readership. Graphical abstracts should be submitted as a separate file in the online submission system. Image size: Please provide an image with a minimum of 531 × 1328 pixels (h × w) or proportionally more. The image should be readable at a size of 5 × 13 cm using a regular screen resolution of 96 dpi. Preferred file types: TIFF, EPS, PDF or MS Office files. You can view Example Graphical Abstracts on our information site. Authors can make use of Elsevier's Illustration Services to ensure the best presentation of their images and in accordance with all technical requirements.
Keywords
Immediately after the abstract, provide a maximum of 6 keywords, using British spelling and avoiding general and plural terms and multiple concepts (avoid, for example, "and", "of"). Be sparing with abbreviations: only abbreviations firmly established in the field may be eligible. These keywords will be used for indexing purposes. To maximize the consistency with which such keywords are assigned by different authors, the following guidelines have been drawn up.
• Each keyword (which can be a phrase of more than one word) should describe one single concept. Words like "and" or "of" should usually be avoided.
• Avoid very general keywords which become meaningless once in a keyword list. Examples to avoid are "action", "computer", "mathematics". Check whether the keywords as a whole describe the outlines of the article.
• Use natural language: for instance "automatic error recovery" rather than "error recovery, automatic".
• Try to use nouns and adjectives as much as possible (i.e. use "automatic error recovery" rather than "recovering errors automatically"). Do not use nouns in the plural form.
• Use British rather than American spelling (regardless of the spelling used for the article itself).
• Avoid the use of abbreviations as much as possible, unless an abbreviation is so well-established that the full term is rarely used (e.g. use "laser" instead of "Light Amplification by Stimulated Emission of Radiation", but use "computer aided design" instead of "CAD").

Abbreviations
Define abbreviations that are not standard in this field in a footnote to be placed on the first page of the article. Such abbreviations that are unavoidable in the abstract must be defined at their first mention there, as well as in the footnote. Ensure consistency of abbreviations throughout the article.
Acknowledgements
Collate acknowledgements in a separate section at the end of the article before the references and do not, therefore, include them on the title page, as a footnote to the title or otherwise. List here those individuals who provided help during the research (e.g., providing language help, writing assistance or proofreading the article).

Formatting of funding sources
List funding sources in this standard way to facilitate compliance with funders' requirements:
Funding: This work was supported by the National Institutes of Health [grant numbers xxxx, yyyy]; the Bill & Melinda Gates Foundation, Seattle, WA [grant number zzzz]; and the United States Institutes of Peace [grant number aaaa].
It is not necessary to include detailed descriptions on the program or type of grants and awards. When funding is from a block grant or other resources available to a university, college, or other research institution, submit the name of the institute or organization that provided the funding. If no funding has been provided for the research, please include the following sentence:
This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.

Math formulae
Please submit math equations as editable text and not as images. Present simple formulae in line with normal text where possible and use the solidus (/) instead of a horizontal line for small fractional terms, e.g., X/Y. In principle, variables are to be presented in italics. Powers of e are often more conveniently denoted by exp. Number consecutively any equations that have to be displayed separately from the text (if referred to explicitly in the text).

Footnotes
Footnotes should be used sparingly. Number them consecutively throughout the article. Many word processors build footnotes into the text, and this feature may be used.
Should this not be the case, indicate the position of footnotes in the text and present the footnotes themselves separately at the end of the article.

**Artwork**

**Image manipulation**
Whilst it is accepted that authors sometimes need to manipulate images for clarity, manipulation for purposes of deception or fraud will be seen as scientific ethical abuse and will be dealt with accordingly. For graphical images, this journal is applying the following policy: no specific feature within an image may be enhanced, obscured, moved, removed, or introduced. Adjustments of brightness, contrast, or color balance are acceptable if and as long as they do not obscure or eliminate any information present in the original. Nonlinear adjustments (e.g. changes to gamma settings) must be disclosed in the figure legend.

**Electronic artwork**
General points:
• Make sure you use uniform lettering and sizing of your original artwork.
• Preferred fonts: Arial (or Helvetica), Times New Roman (or Times), Symbol, Courier.
• Number the illustrations according to their sequence in the text.
• Use a logical naming convention for your artwork files.
• Indicate per figure if it is a single, 1.5 or 2-column fitting image.
• For Word submissions only, you may still provide figures and their captions, and tables within a single file at the revision stage.
• Please note that individual figure files larger than 10 MB must be provided in separate source files.
A detailed guide on electronic artwork is available. You are urged to visit this site; some excerpts from the detailed information are given here.

**Formats**
Regardless of the application used, when your electronic artwork is finalized, please 'save as' or convert the images to one of the following formats (note the resolution requirements for line drawings, halftones, and line/halftone combinations given below):
EPS (or PDF): Vector drawings. Embed the font or save the text as 'graphics'.
TIFF (or JPG): Color or grayscale photographs (halftones): always use a minimum of 300 dpi.
TIFF (or JPG): Bitmapped line drawings: use a minimum of 1000 dpi.
TIFF (or JPG): Combinations bitmapped line/half-tone (color or grayscale): a minimum of 500 dpi is required.
Please do not:
• Supply files that are optimized for screen use (e.g., GIF, BMP, PICT, WPG); the resolution is too low.
• Supply files that are too low in resolution.
• Submit graphics that are disproportionately large for the content.

**Color artwork**
Please make sure that artwork files are in an acceptable format (TIFF (or JPEG), EPS (or PDF), or MS Office files) and with the correct resolution. If, together with your accepted article, you submit usable color figures then Elsevier will ensure, at no additional charge, that these figures will appear in color online (e.g., ScienceDirect and other sites) regardless of whether or not these illustrations are reproduced in color in the printed version. For color reproduction in print, you will receive information regarding the costs from Elsevier after receipt of your accepted article. Please indicate your preference for color: in print or online only. Further information on the preparation of electronic artwork is available.

**Figure captions**
Ensure that each illustration has a caption. A caption should comprise a brief title (not on the figure itself) and a description of the illustration. Keep text in the illustrations themselves to a minimum but explain all symbols and abbreviations used.

**Tables**
Please submit tables as editable text and not as images. Tables can be placed either next to the relevant text in the article, or on separate page(s) at the end. Number tables consecutively in accordance with their appearance in the text and place any table notes below the table body. Be sparing in the use of tables and ensure that the data presented in them do not duplicate results described elsewhere in the article.
Please avoid using vertical rules and shading in table cells.

**References**

**Citation in text**
Please ensure that every reference cited in the text is also present in the reference list (and vice versa). Any references cited in the abstract must be given in full. Unpublished results and personal communications are not recommended in the reference list, but may be mentioned in the text. If these references are included in the reference list they should follow the standard reference style of the journal and should include a substitution of the publication date with either 'Unpublished results' or 'Personal communication'. Citation of a reference as 'in press' implies that the item has been accepted for publication.

**Reference links**
Increased discoverability of research and high quality peer review are ensured by online links to the sources cited. In order to allow us to create links to abstracting and indexing services, such as Scopus, CrossRef and PubMed, please ensure that data provided in the references are correct. Please note that incorrect surnames, journal/book titles, publication year and pagination may prevent link creation. When copying references, please be careful as they may already contain errors. Use of the DOI is highly encouraged.
A DOI is guaranteed never to change, so you can use it as a permanent link to any electronic article. An example of a citation using DOI for an article not yet in an issue is: VanDecar J.C., Russo R.M., James D.E., Ambeh W.B., Franke M. (2003). Aseismic continuation of the Lesser Antilles slab beneath northeastern Venezuela. Journal of Geophysical Research, https://doi.org/10.1029/2001JB000884. Please note the format of such citations should be in the same style as all other references in the paper.

**Web references**
As a minimum, the full URL should be given and the date when the reference was last accessed. Any further information, if known (DOI, author names, dates, reference to a source publication, etc.), should also be given.
Web references can be listed separately (e.g., after the reference list) under a different heading if desired, or can be included in the reference list.

**Data references**
This journal encourages you to cite underlying or relevant datasets in your manuscript by citing them in your text and including a data reference in your Reference List. Data references should include the following elements: author name(s), dataset title, data repository, version (where available), year, and global persistent identifier. Add [dataset] immediately before the reference so we can properly identify it as a data reference. The [dataset] identifier will not appear in your published article.

**References in a special issue**
Please ensure that the words 'this issue' are added to any references in the list (and any citations in the text) to other articles in the same Special Issue.

**Reference management software**
Most Elsevier journals have their reference template available in many of the most popular reference management software products. These include all products that support Citation Style Language styles, such as Mendeley. Using citation plug-ins from these products, authors only need to select the appropriate journal template when preparing their article, after which citations and bibliographies will be automatically formatted in the journal's style. If no template is yet available for this journal, please follow the format of the sample references and citations as shown in this Guide. If you use reference management software, please ensure that you remove all field codes before submitting the electronic manuscript. More information on how to remove field codes from different reference management software is available.
Mendeley Data is a free-to-use open research data repository, which allows you to share the data associated with your article. To make your data available, please create a dataset at Mendeley Data, and publish it (under embargo if you wish). Your dataset will receive a DOI.
Please cite your dataset in the References section. Once your article is accepted, we will place links between your article and the dataset, making your data easily accessible with one click for your article readers. Here is the link to create a dataset: [https://data.mendeley.com/datasets/create](https://data.mendeley.com/datasets/create)

**Reference formatting**
There are no strict requirements on reference formatting at submission. References can be in any style or format as long as the style is consistent. Where applicable, author(s) name(s), journal title/book title, chapter title/article title, year of publication, volume number/book chapter and the article number or pagination must be present. Use of DOI is highly encouraged. The reference style used by the journal will be applied to the accepted article by Elsevier at the proof stage. Note that missing data will be highlighted at proof stage for the author to correct. If you do wish to format the references yourself they should be arranged according to the following examples:

**Reference style**
Text: All citations in the text should refer to:
1. Single author: the author's name (without initials, unless there is ambiguity) and the year of publication;
2. Two authors: both authors' names and the year of publication;
3. Three or more authors: first author's name followed by 'et al.' and the year of publication.
Citations may be made directly (or parenthetically). Groups of references can be listed either first alphabetically, then chronologically, or vice versa. Examples: 'as demonstrated (Allan, 2000a, 2000b, 1999; Allan and Jones, 1999).... Or, as demonstrated (Jones, 1999; Allan, 2000).... Kramer et al. (2010) have recently shown ...'
List: References should be arranged first alphabetically and then further sorted chronologically if necessary. More than one reference from the same author(s) in the same year must be identified by the letters 'a', 'b', 'c', etc., placed after the year of publication.
Examples:
Reference to a journal publication:
Reference to a journal publication with an article number:
Reference to a book:
Reference to a chapter in an edited book:
Reference to a website:
Reference to a dataset:

**Journal abbreviations source**
Journal names should be abbreviated according to the List of Title Word Abbreviations.

**Video**
Elsevier accepts video material and animation sequences to support and enhance your scientific research. Authors who have video or animation files that they wish to submit with their article are strongly encouraged to include links to these within the body of the article. This can be done in the same way as a figure or table by referring to the video or animation content and noting in the body text where it should be placed. All submitted files should be properly labeled so that they directly relate to the video file's content. In order to ensure that your video or animation material is directly usable, please provide the file in one of our recommended file formats with a preferred maximum size of 150 MB per file, 1 GB in total.
Video and animation files supplied will be published online in the electronic version of your article in Elsevier Web products, including ScienceDirect. Please supply 'stills' with your files: you can choose any frame from the video or animation or make a separate image. These will be used instead of standard icons and will personalize the link to your video data. For more detailed instructions please visit our video instruction pages. Note: since video and animation cannot be embedded in the print version of the journal, please provide text for both the electronic and the print version for the portions of the article that refer to this content.

**Data visualization**
Include interactive data visualizations in your publication and let your readers interact and engage more closely with your research.
Follow the instructions here to find out about available data visualization options and how to include them with your article.

**Supplementary material**
Supplementary material such as applications, images and sound clips can be published with your article to enhance it. Submitted supplementary items are published exactly as they are received (Excel or PowerPoint files will appear as such online). Please submit your material together with the article and supply a concise, descriptive caption for each supplementary file. If you wish to make changes to supplementary material during any stage of the process, please make sure to provide an updated file. Do not annotate any corrections on a previous version. Please switch off the 'Track Changes' option in Microsoft Office files as these will appear in the published version.

**Research data**
This journal encourages and enables you to share data that supports your research publication where appropriate, and enables you to interlink the data with your published articles. Research data refers to the results of observations or experimentation that validate research findings. To facilitate reproducibility and data reuse, this journal also encourages you to share your software, code, models, algorithms, protocols, methods and other useful materials related to the project.
Below are a number of ways in which you can associate data with your article or make a statement about the availability of your data when submitting your manuscript. If you are sharing data in one of these ways, you are encouraged to cite the data in your manuscript and reference list. Please refer to the "References" section for more information about data citation. For more information on depositing, sharing and using research data and other relevant research materials, visit the research data page.

**Data linking**
If you have made your research data available in a data repository, you can link your article directly to the dataset.
Elsevier collaborates with a number of repositories to link articles on ScienceDirect with relevant repositories, giving readers access to underlying data that gives them a better understanding of the research described. There are different ways to link your datasets to your article. When available, you can directly link your dataset to your article by providing the relevant information in the submission system. For more information, visit the database linking page.
For supported data repositories a repository banner will automatically appear next to your published article on ScienceDirect. In addition, you can link to relevant data or entities through identifiers within the text of your manuscript, using the following format: Database: xxxx (e.g., TAIR: AT1G01020; CCDC: 734053; PDB: 1XFN).

**Mendeley Data**
This journal supports Mendeley Data, enabling you to deposit any research data (including raw and processed data, video, code, software, algorithms, protocols, and methods) associated with your manuscript in a free-to-use, open access repository. During the submission process, after uploading your manuscript, you will have the opportunity to upload your relevant datasets directly to Mendeley Data. The datasets will be listed and directly accessible to readers next to your published article online. For more information, visit the Mendeley Data for journals page.

**Data in Brief**
You have the option of converting any or all parts of your supplementary or additional raw data into one or more data articles, a new kind of article that houses and describes your data. Data articles ensure that your data is actively reviewed, curated, formatted, indexed, given a DOI and publicly available to all upon publication. You are encouraged to submit your article for Data in Brief as an additional item directly alongside the revised version of your manuscript.
If your research article is accepted, your data article will automatically be transferred over to Data in Brief where it will be editorially reviewed and published in the open access data journal, Data in Brief. Please note an open access fee of 600 USD is payable for publication in Data in Brief. Full details can be found on the Data in Brief website. Please use this template to write your Data in Brief.

**MethodsX**
You have the option of converting relevant protocols and methods into one or multiple MethodsX articles, a new kind of article that describes the details of customized research methods. Many researchers spend a significant amount of time on developing methods to fit their specific needs or setting, but often without getting credit for this part of their work. MethodsX, an open access journal, now publishes this information in order to make it searchable, peer reviewed, citable and reproducible. Authors are encouraged to submit their MethodsX article as an additional item directly alongside the revised version of their manuscript. If your research article is accepted, your methods article will automatically be transferred over to MethodsX where it will be editorially reviewed. Please note an open access fee is payable for publication in MethodsX. Full details can be found on the MethodsX website. Please use this template to prepare your MethodsX article.

**Data statement**
To foster transparency, we encourage you to state the availability of your data in your submission. This may be a requirement of your funding body or institution. If your data is unavailable to access or unsuitable to post, you will have the opportunity to indicate why during the submission process, for example by stating that the research data is confidential. The statement will appear with your published article on ScienceDirect. For more information, visit the Data Statement page.
**AFTER ACCEPTANCE**

**Online proof correction**
To ensure a fast publication process of the article, we kindly ask authors to provide us with their proof corrections within two days. Corresponding authors will receive an e-mail with a link to our online proofing system, allowing annotation and correction of proofs online. The environment is similar to MS Word: in addition to editing text, you can also comment on figures/tables and answer questions from the Copy Editor. Web-based proofing provides a faster and less error-prone process by allowing you to directly type your corrections, eliminating the potential introduction of errors.
If preferred, you can still choose to annotate and upload your edits on the PDF version. All instructions for proofing will be given in the e-mail we send to authors, including alternative methods to the online version and PDF.
We will do everything possible to get your article published quickly and accurately. Please use this proof only for checking the typesetting, editing, completeness and correctness of the text, tables and figures. Significant changes to the article as accepted for publication will only be considered at this stage with permission from the Editor. It is important to ensure that all corrections are sent back to us in one communication. Please check carefully before replying, as inclusion of any subsequent corrections cannot be guaranteed. Proofreading is solely your responsibility.

**Offprints**
The corresponding author will, at no cost, receive a customized Share Link providing 50 days free access to the final published version of the article on ScienceDirect. The Share Link can be used for sharing the article via any communication channel, including email and social media. For an extra charge, paper offprints can be ordered via the offprint order form which is sent once the article is accepted for publication. Both corresponding and co-authors may order offprints at any time via Elsevier's Author Services.
Corresponding authors who have published their article gold open access do not receive a Share Link, as their final published version of the article is available open access on ScienceDirect and can be shared through the article DOI link.

**AUTHOR INQUIRIES**
Visit the Elsevier Support Center to find the answers you need. Here you will find everything from Frequently Asked Questions to ways to get in touch. You can also check the status of your submitted article or find out when your accepted article will be published.
Wenhao Zhu
CAPTCHA Recognition System
Information Technology
2019

FOREWORD

I have been studying at Vaasa Ammattikorkeakoulu (Vaasa University of Applied Sciences) for about 3 years, since 2016. All of these great experiences came from a unique exchange opportunity provided by the Department of International Relations and HMT Affairs Office, Hubei University of Technology. It has been a memorable experience in my life, and I would like to express my appreciation to everyone who helped me.

First of all, I would like to express my great appreciation to my supervisor, Dr. Yang Liu, who provides a great platform and many opportunities for me and many IT students. I had a big delay in my project and thesis process, but Dr. Yang Liu was very tolerant and offered me the chance to finish my thesis. Without him, it would have been hard for me to finish it.

I would also like to thank all the staff at VAMK, especially lecturers Dr. Chao Gao, Dr. Ghodrat Moghadampour, Mr. Santiago Chavez Vega, and Dr. Seppo Makinen. They offer high-quality lectures with great professionalism, which helped me gain a deep understanding of the fields they have researched for many years.

I would also like to express my great gratitude to my parents, who always encourage me to do whatever I decide to do and support me financially and emotionally. Finally, I thank all my friends who study and work with me in the Technobothnia RoboCup Laboratory; I wish you very good luck in the future.

Wenhao Zhu
Vaasa, Finland 19.05.2019

ABSTRACT

Author: Wenhao Zhu
Title: CAPTCHA Recognition System
Year: 2019
Language: English
Pages: 59
Name of Supervisor: Yang Liu

Completely Automated Public Turing test to tell Computers and Humans Apart (CAPTCHA) is a public, fully automatic program that determines whether a user is a computer or a person.
This thesis developed a CAPTCHA recognition system that can be deployed on the NAO robot, a humanoid robot, to pass the Turing test, and that can also be deployed on web services to provide a recognition service. The recognition system uses a convolutional neural network to extract features from the CAPTCHA image and encodes labels with the one-hot encoding scheme, which is widely used in multi-class classification. Python is the programming language used in developing this project; the TensorFlow and Keras libraries are used to establish the neural network easily. The NAO robot version is v5, and code testing was done on the Ubuntu 16.04 release.

The final recognition model showed about 99.67% accuracy on the training dataset and 98.10% accuracy on the test dataset with a suitable optimizer and loss function. Because of how the one-hot encoded data was regulated, the measured accuracy is a bit higher than the accuracy achieved in real applications. Because CAPTCHAs combining numbers and letters would require a very large amount of data, the dataset in this thesis consists only of numbers; this could be improved by using datasets containing CAPTCHAs with both numbers and letters.

Keywords: NAO Robot, CAPTCHA, Convolutional Neural Network

CONTENTS
1 INTRODUCTION
  1.1 Background
  1.2 Purpose
  1.3 Overall Structure
2 TECHNOLOGIES AND DEVELOPMENT TOOLS
  2.1 Technologies
    2.1.1 Artificial Intelligence
    2.1.2 Perceptron Model
    2.1.3 Deep Learning
    2.1.4 Convolutional Neural Network
  2.2 Programming Tools
    2.2.1 Python
    2.2.2 OpenCV
    2.2.3 NumPy
    2.2.4 Matplotlib
    2.2.5 Captcha
    2.2.6 TensorFlow
    2.2.7 Keras
    2.2.8 Google Colaboratory
    2.2.9 Google Cloud Speech-to-Text
    2.2.10 Flask
  2.3 NAO Robot
    2.3.1 NAOqi
    2.3.2 ALAudioRecorder
    2.3.3 ALTextToSpeech
3 OVERALL DESIGN
4 DATA PREPROCESSING
  4.1 Create Dataset
  4.2 Process Input Data
    4.2.1 RGB Format to Grayscale
    4.2.2 Data Normalization

LIST OF FIGURES AND TABLES
Figure 1. Development of Artificial Intelligence [1]
Figure 2. Schematic of a biological neuron [2]
Figure 3. Perception Model [2]
Figure 4. MLP with some hidden layers [3]
Figure 5. CNN Concept Diagram [4]
Figure 6. Convolution Layer [5]
Figure 7. Max-Pooling [4]
Figure 8. Flattening Process [6]
Figure 9. Full Connected Layer Structure [7]
Figure 10. Dataflow graph in TensorFlow [4]
Figure 11. Training model on Google Colaboratory with GPU acceleration
Figure 12. Google Cloud Speech-to-Text
Figure 13. NAO robot structure (Unit: mm) [6]
Figure 14. NAO Video camera [7]
Figure 15. NAO Video camera [7]
Figure 16. NAO robot Microphones [9]
Figure 17. NAO robot Loudspeakers [10]
Figure 18. Data Preparation flowchart
Figure 19. Training Process Flowchart
Figure 20.
Deployment Flowchart
Figure 21. CAPTCHA example generated by Captcha library
Figure 22. CAPTCHA charset, dataset size, and data directory
Figure 23. CAPTCHA generating methods
Figure 24. CAPTCHA raw images with labels
Figure 25. Load training and testing dataset
Figure 26. Convert image to numpy array
Figure 27. RGB to grayscale method
Figure 28. Grayscale CAPTCHA image
Figure 29. Data Normalization process
Figure 30. Before normalization
Figure 31. After normalization
Figure 32. Method of fit Keras channels
Figure 33. Example of one-hot encoding [11]
Figure 34. Method of transmitting label text to one-hot vector
Figure 35. "9513" encoded by one-hot encoding
Figure 36. Method of transmitting vector to text
Figure 37. Example of vector to text
Figure 38. AlexNet Structure [12]
Figure 39. VGG-16 model [13]
Figure 40. Flow chart of the CNN
Figure 41. CNN model in this project
Figure 42. The whole structure of the CNN
Figure 43. Flow chart of the training process
Figure 44. Model and history files
Figure 45. Training process
Figure 46. MSE graph [15]
Figure 47. Gradient Descent [17]
Figure 48. With momentum & without momentum [30]
Figure 49. adam & binary cross entropy
Figure 50. adam & Poisson
Figure 51. Model Accuracy comparison by different loss functions in train dataset
Figure 52. Model Loss comparison by different loss functions in train dataset
Figure 53. Model Accuracy comparison by different loss functions in the test dataset
Figure 54. Model Loss comparison by different loss functions in the test dataset
Figure 55. Model Accuracy comparison by different optimizers in train dataset
Figure 56. Model Loss comparison by different optimizers in train dataset
Figure 57. Model Accuracy comparison by different optimizers in the test dataset
Figure 58. Model Loss comparison by different optimizers in the test dataset
Figure 59. Recognize CAPTCHA image
Figure 60. Robot recognizing CAPTCHA images
Figure 61.
Deploy CAPTCHA recognition service to a web server
Figure 62. Call a CAPTCHA recognition service through CURL command

1 INTRODUCTION

1.1 Background

As artificial intelligence and deep learning have been gaining more and more attention since 2010, people have again begun to discuss where the boundaries between human beings and machines lie. CAPTCHA, an acronym for "Completely Automated Public Turing test to tell Computers and Humans Apart", is a fully automatic program that determines whether a user is a computer or a human. A commonly used CAPTCHA test asks the user to type the text or number displayed in a distorted picture; the distortion is meant to defeat automatic recognition by programs such as Optical Character Recognition (OCR). If the user types the text inside the CAPTCHA correctly, he or she is believed to be human; otherwise, the program generates a new CAPTCHA and repeats the test within a limited number of chances. If the input text fails to match the CAPTCHA text more than a certain number of times, the user is considered a robot and the previous work is not accepted. Recognizing CAPTCHAs was considered hard in the past; however, thanks to deep learning algorithms, it has become possible with today's technology.

1.2 Purpose

The goal of this thesis is to build a CAPTCHA recognition system that can recognize the text contained in a CAPTCHA image with over 50% accuracy, and that is general enough to be deployed on a NAO robot in a real situation. To reach this goal, a series of steps is carried out. First, two datasets are prepared for training and testing the model. Secondly, the CAPTCHA images are regularized and processed in order to fit the training format and to train the model more quickly. After these preprocessing steps, a convolutional neural network is established and the model is trained with multiple optimizers and loss functions. Then we analyze the results and choose the best-performing model as our final model.
Finally, we connect to the NAO robot and deploy our recognition system on it. We also provide a recognition interface for web services that can be invoked by a certain command.

1.3 Overall Structure

This thesis consists of ten chapters. The first chapter is the introduction, which includes the relevant background knowledge, the purpose and the structure of this thesis. Chapter two mainly introduces the technologies and development tools used in this project, including the programming language, the neural network library and an introduction to the NAO robot. In chapter three, the overall design of the whole project is given, with flow charts and textual explanations. Chapter four illustrates how data is processed before training in the convolutional neural network, in order to suit the data format required by the Keras library. Chapters five and six are the model design and training parts; there, the CNN training structure is shown in several figures that explain it clearly, and how our model is trained is described in detail. After our model has been trained, an analysis of the results is given in chapter seven, including the choice of optimizers and loss functions based on the output results. Chapter eight is about deployment: in this chapter the best trained model is deployed on the NAO robot to establish a CAPTCHA recognition system in a real situation. Besides, a CAPTCHA recognition web service is also provided through the Flask web framework. The final chapter is the conclusion, in which some recommended improvements for future research are presented, as well as the limitations.

2 TECHNOLOGIES AND DEVELOPMENT TOOLS

2.1 Technologies

2.1.1 Artificial Intelligence

Artificial Intelligence (AI) has gradually appeared in many news reports recently. However, it started as early as the 1950s. AI has gone through three development periods.
From 1956, when the concept of AI was first presented at the Dartmouth meeting, to the 1980s is the first development period. Scientists started to do research in this field, mainly focused on voice recognition, cryptography, and expert systems; IBM's Deep Blue, which beat a world champion chess player, grew out of this line of research. After that, machine learning and the simple perceptron model were introduced to the field, which brought the second big development; an advertisement blocker is a typical application based on machine learning methods. In the twenty-first century, people realized that deep learning works well when the amount of data and the computing capability are sufficient. In 2010, Dr. Fei-Fei Li, together with many other researchers, established ImageNet, which brought AI into the third development period.

2.1.2 Perceptron Model

In 1957, psychologist Frank Rosenblatt defined the concept of the perceptron algorithm based on the work of Warren McCulloch and Walter Pitts, who classified biological neurons as simple logic gates with binary outputs. More intuitively, a neuron can be understood as a child node of a neural network in a biological brain. Here, the variable signal, considered as the input signal, reaches the dendrites. When the intensity of the input signal exceeds a certain threshold, an output signal is generated and transmitted along the axon. [2]

![Schematic of a biological neuron](image)

**Figure 2.** Schematic of a biological neuron [2]

![Perception Model](image)

**Figure 3.** Perception Model [2]

The purpose of the perceptron algorithm is to learn a weight vector $w$ for a sample set of multidimensional features, such that after multiplying $w$ by the input feature vector $x$, it can be determined from the result whether a neuron is activated. The perceptron is a two-class classification model.

### 2.1.3 Deep Learning

The concept of deep learning stems from the research of artificial neural networks.
The artificial neural network is a computational model inspired by the biological neural networks that process information in the human brain. It has made a series of breakthroughs in the fields of speech recognition, computer vision, and text processing. The Multilayer Perceptron (MLP) is a specific artificial neural network, also called an Artificial Neural Network (ANN). In addition to the input and output layers, it can have multiple hidden layers in the middle; the simplest MLP contains only one hidden layer, i.e., a three-layer structure.

![MLP with some hidden layers](image)

**Figure 4.** MLP with some hidden layers [3]

### 2.1.4 Convolutional Neural Network

The Convolutional Neural Network is an application of deep learning algorithms in the field of image processing, shown below. It is an efficient recognition method which has been developed in recent years and has attracted wide attention. In the 1960s, while studying the neurons used for local sensitivity and direction selection in the cat cortex, Hubel and Wiesel found that their unique network structure could effectively reduce the complexity of the feedback neural network, which led to the convolutional neural network (CNN). The neocognitron proposed by K. Fukushima in 1980 was the first implementation of a convolutional neural network. Subsequently, more researchers made improvements to the network. Among them, a representative research result is the "improved cognitive machine" proposed by Alexander and Taylor, which synthesizes the advantages of various improvement methods and avoids time-consuming error back-propagation. [3]

![CNN Concept Diagram](image)

**Figure 5.** CNN Concept Diagram [4]

A CNN contains several kinds of layers:

1.
Convolution layer: The convolution layer simulates the nature of the local receptive field; it is not fully connected to the previous layer, but connected to a small area, and this small piece is the local receptive field. By constructing specific convolution neurons, the artificial neural network can simulate the way different neurons respond to different shapes. As shown in Figure 6, a neuron forms a feature map by processing a layer; multiple feature maps are then stacked, and the number of layers gradually accumulates.

2. Pooling layer: Memory consumption is huge when the picture is large, and the role of the pooling layer is to condense the feature map and ease the memory pressure: it selects an area of a certain size and represents that area with a single representative element. There are two specific kinds of pooling, averaging and taking the maximum value, and the common type of pooling layer is Max-Pooling, shown in Figure 7. The main benefit of Max-Pooling is that if the picture is panned by a few pixels, the result is not affected at all, so Max-Pooling has a good anti-noise property.

Figure 6. Convolution Layer [5]

3. Flatten: The convolution layer cannot connect directly with the Dense fully connected layer. The data from the convolution layer needs to be flattened before it can be fed to the Dense layer, so flatten compresses the height, width and channel data from the convolution layer into a one-dimensional array whose length is height x width x channels, in order to connect with the fully connected layer, as shown in Figure 8.

4. Fully connected layer: The fully connected layer (FC) acts as a "classifier" in the convolutional neural network, shown in Figure 9.
If the convolution layers, pooling layers, and activation functions map the original data to a hidden feature space, the fully connected layer maps this "distributed feature representation" to the sample label space. In practice, the fully connected layer can be realized by a convolution operation.

![Figure 9. Full Connected Layer Structure [7]](image)

### 2.2 Programming Tools

#### 2.2.1 Python

Python is a simple, interpreted, interactive, high-level programming language. Its clear and elegant syntax is widely acclaimed. It has most of the features of an object-oriented language and supports full object-oriented programming. Python is portable and runs across a variety of operating systems, including Windows, MacOS, and Linux. These features have made it very popular internationally, and it is gaining more and more applications. Python version 2.7 is used in this project and the IDE is PyCharm. [3]

#### 2.2.2 OpenCV

OpenCV (Open Source Computer Vision) is an open-source library that is widely used in the field of computer vision. The library is written in C and C++ and can run on Windows, Linux, and Mac OS systems. OpenCV supports interfaces in many programming languages; the Python interface is used in this thesis, and the version is 4.1.0. All the library's code is optimized and computationally efficient because the library was designed for real-time systems. OpenCV is optimized in C and can take advantage of multi-core machines. One of its goals is to provide a friendly machine vision interface that enables complex machine vision products to be developed quickly. The library contains over 500 interface functions spanning areas such as industrial product testing, medical image processing, security, user interfaces, camera calibration, 3D imaging, machine vision and more.
2.2.3 NumPy

NumPy is a powerful Python library for performing computations on multidimensional arrays. The word NumPy comes from two words: Numerical and Python. NumPy provides a large number of library functions and operations that help programmers easily perform numerical calculations. This type of numerical calculation is widely used in machine learning models, image processing, computer graphics, and mathematical tasks. [10]

2.2.4 Matplotlib

Matplotlib is a Python 2D plotting library which produces publication-quality figures in a variety of hardcopy formats and interactive environments across platforms. Matplotlib can be used in Python scripts, the Python and IPython shells, the Jupyter notebook, web application servers, and four graphical user interface toolkits. [11]

2.2.5 Captcha

Captcha is a Python library that enables users to generate audio and image CAPTCHAs. The Captcha library is easy to install and can be used in a simple way.

2.2.6 TensorFlow

TensorFlow is an open-source software library for machine learning for a variety of perception and language understanding tasks. In 2012, Google released its first generation of large-scale distributed deep learning framework, Google DistBelief, which is widely used in various Google applications such as Google Translate and YouTube. In 2015, Google open-sourced the second generation of its large-scale distributed deep learning platform, TensorFlow. Nowadays many companies are using this technology, including Airbnb, Uber and Intel. TensorFlow uses a dataflow graph to represent computation in terms of the dependencies between individual operations.

Figure 10. Dataflow graph in TensorFlow [4]

2.2.7 Keras

Keras is a high-level neural networks API, written in Python and capable of running on top of TensorFlow, CNTK, or Theano. It was developed with a focus on enabling fast experimentation. Being able to go from idea to result with the least possible delay is key to doing good research.
[13]

2.2.8 Google Colaboratory

Google Colaboratory, also known as Colab, is a free Jupyter notebook environment (a web-based interactive computing environment for creating Jupyter Notebook documents) that requires no setup and runs entirely in the cloud. With Colaboratory you can write and execute code, save and share your analyses, and access powerful computing resources, all for free from your browser. Colaboratory was originally part of the Jupyter project but was eventually taken over by Google. Colab supports free GPU training acceleration. [14]

![Figure 11. Training model on Google Colaboratory with GPU acceleration.](image)

2.2.9 Google Cloud Speech-to-Text

Google Cloud Speech-to-Text enables developers to convert audio to text by applying powerful neural network models in an easy-to-use API. The API recognizes 120 languages and variants to support a global user base. It enables voice command-and-control, transcription of audio from call centers, and more. It can process real-time streaming or prerecorded audio, using Google's machine learning technology. [14]

2.2.10 Flask

Flask is a lightweight web application framework written in Python, based on the Werkzeug WSGI toolkit and the Jinja2 template engine. Flask is released under the BSD license. It implements the core of the framework in a minimalist way while retaining scalability. Flask has a default development server and is easy to debug: if the application is set to debug mode, whenever files in the application change, the server detects them and reloads automatically to update the application. At the same time, Flask provides a helpful debugger when exceptions occur. [5]

2.3 NAO Robot

The NAO robot is a humanoid robot developed by Aldebaran Robotics in France (acquired by SoftBank Group in 2015 and rebranded as SoftBank Robotics) and is widely used for research and education purposes in numerous academic institutions.
NAO can be used as a research robot in schools and universities for teaching programming and developing human-machine interaction. The NAO robot has a variety of sensors which allow it to move like a human being. It has dual cameras, which support it in fully identifying and positioning objects at 30 frames per second with a resolution of 1280 x 960 pixels. It contains 4 microphones, located at the front, rear, left and right respectively, all with a frequency range of 300 Hz to 8000 Hz. The robot also has a stereo system consisting of two loudspeakers and an LED light. The robot has two CPUs, both from the Intel series. One is located in the head of the robot, runs a Linux kernel, and supports the NAOqi framework developed by Aldebaran.

**Figure 13.** NAO robot structure (Unit: mm) [6]

Figure 14. NAO Video camera [7]

Figure 15. NAO Video camera [7]

2.3.1 NAOqi

The NAOqi framework is the programming framework developed by Aldebaran and used to program NAO. It provides a variety of programming SDKs, including C++ and Python. In the NAOqi framework, the robot must communicate with the external world through proxies. In this thesis, the ALAudioRecorder proxy and the ALTextToSpeech proxy are used in order to enable conversations between the robot and users.

2.3.2 ALAudioRecorder

ALAudioRecorder provides recording services in "WAV" and "OGG" file formats for the signals coming from the robot's microphones. ALAudioRecorder relies on the Linux library SNDFile to efficiently encode audio inputs in real time, and it collects input signals through ALAudioDevice. ALAudioDevice provides other NAOqi modules with access to NAO's audio inputs (microphones) and outputs (loudspeakers). It is based on the standard Linux ALSA (Advanced Linux Sound Architecture) library to communicate with NAO's sound driver and subsequently with the microphones and loudspeakers.
To process data coming from the microphones, the procedure is different. A NAOqi module willing to process such data will first "subscribe" to ALAudioDevice and specify the format of the data that it requires (number of channels, sample rate, etc.). The correctly formatted data will then be sent automatically and regularly to the requesting module, using one of its methods as a callback. The recording capabilities are for now limited to the following formats:

1. four channels, 48000 Hz, in OGG;
2. four channels, 48000 Hz, in WAV;
3. one channel (front, rear, left or right), 16000 Hz, in OGG;
4. one channel (front, rear, left or right), 16000 Hz, in WAV. [8]

2.3.3 ALTextToSpeech

The ALTextToSpeech module allows the robot to speak. It sends commands to a text-to-speech engine and also allows voice customization. The result of the synthesis is sent to the robot's loudspeakers. [22]

3 OVERALL DESIGN

The whole structure of the system is shown below. First of all, a training dataset and a testing dataset are created using the Captcha library, containing thousands of CAPTCHA images with their labels as file names. Secondly, the image data is loaded into memory and processed to fit the training format. After the image data and labels are prepared, a convolutional neural network is established with the dataset as input. Then we train our model over several rounds to improve its accuracy, and select the best optimizer and loss function for this model. When a model file has been trained with good performance on the test dataset, we deploy it to the NAO robot and to a web service to test its recognition capability in a real situation.

**Figure 18. Data Preparation flowchart**

**Figure 19. Training Process Flowchart**

Figure 20. Deployment Flowchart

4 DATA PREPROCESSING

4.1 Create Dataset

CAPTCHA creation depends heavily on the generator used: different generators can create very different CAPTCHA images.
The Captcha library is used to create CAPTCHA images in this thesis.

![CAPTCHA example generated by Captcha library](image)

Figure 21. CAPTCHA example generated by Captcha library

We first create a charset which contains the characters we would like to insert in the CAPTCHA image. In this case, the digits 0 to 9 are used as the charset.

```python
from captcha.image import ImageCaptcha

import random
import numpy as np
import tensorflow.gfile as gfile
import matplotlib.pyplot as plt
import PIL.Image as Image

NUMBER = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
CAPTCHA_CHARSET = NUMBER    # Captcha charset
print(CAPTCHA_CHARSET)

CAPTCHA_LEN = 4             # Captcha length
CAPTCHA_HEIGHT = 60         # Captcha height
CAPTCHA_WIDTH = 160         # Captcha width
CAPTCHA_CLASS = len(CAPTCHA_CHARSET)

TRAIN_DATASET_SIZE = 10000  # Captcha dataset size (train data)
TEST_DATASET_SIZE = 2000    # Captcha dataset size (test data)
TRAIN_DATA_DIR = './train_data/'  # Captcha train data directory
TEST_DATA_DIR = './test_data/'    # Captcha test data directory
```

Figure 22. CAPTCHA charset, dataset size, and data directory

The CAPTCHA image is set to 160 pixels wide and 60 pixels high, and each image contains 4 characters. As the training dataset needs a large amount of data to construct the recognition model, the size of the training dataset is set to 10000 and the size of the test dataset to 2000. However, as the digits 0 to 9 give exactly 10000 possible 4-character combinations, which is the same as our training dataset size, the final dataset must contain fewer than 10000 CAPTCHA images, because CAPTCHA images containing the same characters are not treated as different CAPTCHAs.
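Since the label text doubles as the file name, duplicated texts collapse into a single file. The expected number of distinct images left after 10000 random draws from 10000 equally likely texts can be estimated with a short back-of-the-envelope calculation (a sketch, not part of the thesis code):

```python
combos = 10 ** 4  # possible 4-digit texts
draws = 10000     # images generated

# probability that a given text is drawn at least once
p_hit = 1.0 - (1.0 - 1.0 / combos) ** draws

expected_unique = combos * p_hit
print(round(expected_unique))  # roughly 6321 distinct CAPTCHA files
```

So only about 63% of the nominal 10000 training images survive as distinct files, which is consistent with the "fewer than 10000" remark above.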
```python
# Generate random captcha text
def gen_random_text(charset=CAPTCHA_CHARSET, length=CAPTCHA_LEN):
    text = ''.join([random.choice(charset) for _ in range(length)])
    return text

# Create and store the captcha dataset
def create_captcha_dataset(size=10000,
                           data_dir='./data/',
                           height=60,
                           width=160,
                           image_format='.png'):
    # before storing the captcha images, first clear the data_dir directory
    if gfile.Exists(data_dir):
        gfile.DeleteRecursively(data_dir)
    gfile.MakeDirs(data_dir)

    # create ImageCaptcha instance 'captcha'
    captcha = ImageCaptcha(width=width, height=height)

    for _ in range(size):
        # generate a random captcha text
        text = gen_random_text(CAPTCHA_CHARSET, CAPTCHA_LEN)
        captcha.write(text, data_dir + text + image_format)

    return None
```

**Figure 23.** CAPTCHA generating methods

The figure above shows the methods that generate the CAPTCHA dataset. The first method creates a random four-character string; it takes the charset and the length of the text as arguments. The second method takes five arguments: the size of the dataset, the directory of the dataset, and the height, width and image format of the CAPTCHA image. We use the height, width and image format arguments to define the CAPTCHA image format, then create a random four-character string, which is used as the file name for the CAPTCHA containing that text. The label for each CAPTCHA is therefore the file name without the suffix (.png). The figure below shows what the dataset looks like.

4.2 Process Input Data

4.2.1 RGB Format to Grayscale

As our CAPTCHA images are in RGB (Red Green Blue) format, each image contains a lot of color information. In order to accelerate the training process, we need to reduce the amount of data contained in each CAPTCHA image. The color information is not essential, so we convert our RGB images to grayscale, which reduces the number of color channels from three to one.
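A minimal sketch of such a conversion, assuming the common ITU-R BT.601 luminosity weights (the actual method shown in Figure 27 may differ in detail):

```python
import numpy as np

def rgb2gray(img):
    # weighted sum over the channel axis: 0.299 R + 0.587 G + 0.114 B
    return np.dot(img[..., :3], [0.299, 0.587, 0.114])

# a dummy white 60x160 RGB image stands in for a real CAPTCHA
img = np.full((60, 160, 3), 255.0)
gray = rgb2gray(img)
print(gray.shape)  # (60, 160) - one channel instead of three
```

Each pixel collapses from three numbers to one, cutting the input size by two thirds.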
Before converting the color space, we first load our dataset and convert each image to a NumPy array, since Keras only accepts input data in this format.

4.2.2 Data Normalization

Data normalization scales the data so that it falls into a small, specific interval. Normalizing the data in each batch can improve the generalization of the model and prevent overfitting; it can also improve the convergence speed of iterative solvers.

4.2.3 Fit Keras Channels

Keras provides two data formats, "channels-first" and "channels-last". The default is "channels-last", which puts the channel value after the image height and width in the vector; "channels-first" puts the channel value first instead. Here we define a method to make our values fit the Keras channel format. The batch number is the number of images processed together when training the model.

4.3 Process Output Data

4.3.1 One-hot Encoding

One-hot encoding, also known as one-bit-effective encoding, uses an N-bit status register to encode N states; each state has its own register bit, and at any time only one bit is valid. It can be understood as follows: for each feature with m possible values, one-hot encoding turns it into m binary features. These features are mutually exclusive, with only one active at a time; therefore, the data becomes sparse.

![Table: One-hot Encoding Example]

We encode our labels with this encoding system. After one-hot encoding, the label text is converted to a vector; we then use a NumPy method to convert this vector to a NumPy array fitting the Keras format. Besides, converting our label text to vectors also accelerates the training process. During the loop we create an array of length 10 four times; each digit 0-9 occupies one position of its array according to one-hot encoding. After the loop, we concatenate these 4 arrays into one array of length 40. The figure below is an example.
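The encoding loop described above can be sketched as follows (`text2vec` is a hypothetical helper name; the thesis's own method is the one shown in Figure 34):

```python
import numpy as np

CAPTCHA_CHARSET = ['0', '1', '2', '3', '4', '5', '6', '7', '8', '9']
CAPTCHA_LEN = 4

def text2vec(text, charset=CAPTCHA_CHARSET):
    # one length-10 one-hot block per character, concatenated to length 40
    vector = np.zeros(CAPTCHA_LEN * len(charset))
    for i, ch in enumerate(text):
        vector[i * len(charset) + charset.index(ch)] = 1.0
    return vector

vec = text2vec('9513')
print(vec.shape)                # (40,)
print(np.where(vec == 1.0)[0])  # active positions: [ 9 15 21 33]
```

For "9513", the digit 9 sets bit 9 of the first block, 5 sets bit 15 (10 + 5), 1 sets bit 21 (20 + 1), and 3 sets bit 33 (30 + 3).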
The label "9513" is converted into an array of length 40 after one-hot encoding.

4.3.2 Decoding Output Vector to Text

The predicted value output by our model is an array containing 40 probability values. This cannot be read directly, so we translate it into a four-character text. Every ten values form a group, and the value with the highest probability is selected as the representative of its group. The 4 groups give 4 output values; combining these 4 values and converting them to text constitutes our prediction.

Figure 36. Method of transmitting vector to text

We first convert this vector to a NumPy array, then use the argmax() method to translate the vector values to text.

```python
def vec2text(vector):
    if not isinstance(vector, np.ndarray):
        vector = np.asarray(vector)
    vector = np.reshape(vector, [CAPTCHA_LEN, -1])
    text = ''
    for item in vector:
        text += CAPTCHA_CHARSET[np.argmax(item)]
    return text
```

Figure 37. Example of vector to text

Here is an example of the translating process. The predicted value of the model is an array with 40 probability values; we choose the highest value in every group of ten, translate the groups to text, and combine them into a four-character prediction.
```python
model = load_model(MODEL_FILE)

print(vec2text(Y_test[534]))
#img = rgb2gray(np.array(Image.open(X_test[2488])))
#y2 = vec2text(Y_test[2488])
yy = model.predict(X_test[534].reshape(1, 60, 160, 1))
print(yy)
print('prediction: ' + vec2text(yy))
```

```
[[4.0866754e-12 1.7032077e-11 1.6760635e-10 2.0114871e-10 5.5554733e-06
  5.5692750e-10 1.0205610e-17 1.3384788e-13 1.5542446e-13 9.9999440e-01
  2.3645453e-05 2.5510424e-07 5.1437415e-05 6.1783078e-07 9.9518341e-01
  8.2850124e-07 7.0844399e-07 2.0723822e-05 5.4019678e-05 4.6643380e-03
  0.1542390e-01 2.8178500e-02 8.4868576e-03 6.3412213e-03 4.4892032e-02
  1.7976676e-01 3.6642328e-03 4.4203461e-03 2.5968870e-02 8.2857139e-02
  4.3378881e-10 1.1058735e-09 2.4945990e-03 2.4452477e-08 1.4293934e-09]]
prediction: 9401
```

5 MODEL STRUCTURE DESIGN

5.1 AlexNet and VGG-16 Model

In 2012, Alex Krizhevsky and Ilya Sutskever designed the deep convolutional neural network AlexNet in Geoff Hinton's lab at the University of Toronto, winning the 2012 ImageNet LSVRC championship with an accuracy far exceeding the runner-up, which caused a sensation. AlexNet can be said to be a historic network structure; before it, deep learning had been silent for a long time. Since the birth of AlexNet in 2012, the subsequent ImageNet champions have all used convolutional neural networks (CNNs), which made the CNN the core algorithm model in image recognition and classification and triggered the explosion of deep learning. The success of AlexNet is related to the characteristics of its design. AlexNet has 3 main features:

1. A nonlinear activation function is used: ReLU
2. Methods to prevent overfitting: Dropout, data augmentation
3. Other: multi-GPU implementation, use of an LRN normalization layer

![Figure 38.
AlexNet Structure [12]](image)

In 2014, researchers from the University of Oxford's Visual Geometry Group and Google DeepMind developed a new deep convolutional neural network, VGGNet, and won second place in the ILSVRC 2014 classification task. VGGNet explores the relationship between the depth of a convolutional neural network and its performance: by successfully constructing 16-to-19-layer deep convolutional neural networks, it proved that increasing the depth of the network can affect the final performance to a certain extent and drastically reduce the error rate. Besides, its scalability is very strong, and its generalization when migrating to other image data is also very good; to this day, VGG is still used to extract image features. VGGNet can be seen as a deeper version of AlexNet, likewise consisting of two parts: convolutional layers and fully connected layers. The small convolution kernel is an important feature of the VGG model: VGG uses convolutional layers with multiple smaller convolution kernels (3x3) instead of a convolutional layer with a larger kernel. On the one hand this reduces parameters; on the other hand it is equivalent to more nonlinear mappings, which increases the network's fitting and expressive capability. [13]

![Figure 39. VGG-16 model [13]](image)

5.2 Construct Model

In this thesis, a VGG-16-style model is used to create our convolutional neural network. The network consists of 3 convolutional layers, 3 pooling layers, 1 dropout layer, and 2 fully connected layers. The last fully connected layer classifies the output into 10 classes 4 times, and each time the class with the maximum probability is the output value. We then splice the 4 classification results together to get the final result.

Figure 40. Flow chart of the CNN

Figure 41. CNN model in this project

The output is an array containing each value's probability; we then use the vector-to-text method introduced above to decode it into the predicted text.
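With a 60 x 160 input and the three pooling stages just described, the spatial size reaching the Flatten layer can be worked out by simple integer arithmetic. This is a sketch assuming 'same'-padded convolutions and 2x2 max-pooling with stride 2; the exact padding choices of the model in Figure 41 may differ:

```python
def after_pool(height, width, pool=2):
    # a 2x2 max-pooling layer with stride 2 halves each spatial
    # dimension, rounding down when the size is odd
    return height // pool, width // pool

h, w = 60, 160  # CAPTCHA_HEIGHT, CAPTCHA_WIDTH
for stage in range(3):  # three conv+pool stages; 'same' padding keeps conv size
    h, w = after_pool(h, w)
    print('after pool', stage + 1, ':', h, 'x', w)

# the Flatten layer then yields h * w * channels values per image
```

Under these assumptions the three stages shrink 60 x 160 to 30 x 80, 15 x 40, and finally 7 x 20 before flattening.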
Figure 42. The whole structure of the CNN

6 MODEL TRAINING PROCESS

After establishing our CNN model, it is time to train it with the pre-defined batch size and number of epochs.

![Flow chart of the training process](image)

**Figure 43.** Flow chart of the training process

In order to select the best optimizer and loss function for this model, 7 combinations are chosen for training. Each model is saved as a .h5 file, and the corresponding log is created along with the model file and saved in the history folder with the .history suffix.

![Model and history files](image)

**Figure 44.** Model and history files

We use the model.fit() method (provided by the Keras library) to start training our model.

Figure 45. Training process

X_train and Y_train are the images and labels of the training dataset. batch_size, explained above, is the number of images processed together. The epochs value is the number of training rounds; after several trials, 300 rounds proved sufficient for our model to reach a high accuracy on the test dataset. verbose controls the log output: 0 means no log information is written to the standard output stream, 1 outputs a progress bar, and 2 outputs one result line per epoch. validation_data makes the model compute the accuracy and loss on the test dataset, in order to detect over-fitting and ensure that our model suits more general application scenarios than just performing well on the training set.
```python
history = model.fit(X_train,
                    Y_train,
                    batch_size=BATCH_SIZE,
                    epochs=EPOCHS,
                    verbose=2,
                    validation_data=(X_test, Y_test))

if not os.path.exists(MODEL_DIR):
    os.makedirs(MODEL_DIR)

model.save(MODEL_FILE)
print('Saved trained model at %s' % MODEL_FILE)
```

```
 - 5s - loss: 0.0754 - acc: 0.9712 - val_loss: 0.0940 - val_acc: 0.9647
Epoch 42/300
 - 5s - loss: 0.0743 - acc: 0.9720 - val_loss: 0.0899 - val_acc: 0.9674
Epoch 43/300
 - 5s - loss: 0.0727 - acc: 0.9721 - val_loss: 0.0902 - val_acc: 0.9667
Epoch 44/300
 - 5s - loss: 0.0716 - acc: 0.9727 - val_loss: 0.0890 - val_acc: 0.9671
Epoch 45/300
 - 5s - loss: 0.0710 - acc: 0.9731 - val_loss: 0.0945 - val_acc: 0.9651
Epoch 46/300
 - 5s - loss: 0.0701 - acc: 0.9737 - val_loss: 0.0903 - val_acc: 0.9666
Epoch 47/300
 - 5s - loss: 0.0686 - acc: 0.9742 - val_loss: 0.0931 - val_acc: 0.9659
Epoch 48/300
 - 5s - loss: 0.0693 - acc: 0.9737 - val_loss: 0.0903 - val_acc: 0.9675
Epoch 49/300
 - 5s - loss: 0.0689 - acc: 0.9740 - val_loss: 0.0897 - val_acc: 0.9669
Epoch 50/300
 - 5s - loss: 0.0680 - acc: 0.9743 - val_loss: 0.0881 - val_acc: 0.9674
```

7 RESULT ANALYSIS

7.1 Loss Function

The loss function estimates the degree of inconsistency between the predicted value \( f(x) \) of the model and the true value \( Y \). It is a non-negative real-valued function, usually written \( L(Y, f(x)) \). The smaller the loss, the better the robustness of the model. The loss function is the core part of the empirical risk function and an important part of the structural risk function.
The structural risk function of the model includes an empirical risk term and a regularization term, and can usually be expressed as follows:

\[ \theta^* = \arg\min_{\theta} \frac{1}{N} \sum_{i=1}^{N} L(y_i, f(x_i; \theta)) + \lambda \Phi(\theta) \]

Here the first term, the averaged loss \( L \), is the empirical risk, and \( \Phi \) is a regularizer or penalty term, which can be L1, L2, or another regular function. The whole expression means finding the value of \( \theta \) that minimizes the objective function.

7.1.1 Mean Squared Error

Mean Squared Error (MSE) is the most commonly used regression loss function. It is computed as the mean of the squared differences between the predicted and true values:

\[ \text{MSE} = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2 \]

7.1.2 Poisson Regression

The Poisson loss function measures how the predicted distribution diverges from the expected distribution. It is derived from the Poisson distribution, which is widely used to model count data: the Poisson can be shown to be the limiting distribution of a binomial when the number of trials goes to infinity and the success probability goes to zero in such a way that \( np \) approaches a fixed mean rate for the process.
\[ L = \frac{1}{n} \sum_{i=1}^{n} \left( y_{\text{pred}}^{(i)} - y_{\text{true}}^{(i)} \log y_{\text{pred}}^{(i)} \right) \] [16]

7.1.3 Cross Entropy

Cross-entropy is commonly used as a loss function in binary classification, where labels are assumed to take the values 0 or 1 (for multi-class problems, multi-class cross entropy is used). It is computed by:

\[ L = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_{\text{true}}^{(i)} \log y_{\text{pred}}^{(i)} + (1 - y_{\text{true}}^{(i)}) \log(1 - y_{\text{pred}}^{(i)}) \right] \]

Cross-entropy measures the divergence between two probability distributions: a large cross-entropy means the two distributions differ greatly, while a small cross-entropy means they are similar. [16]

7.2 Optimizer

The function of the optimizer is to minimize (or maximize) the loss function \( E(x) \) by adjusting the model parameters during training. Optimization algorithms fall into two broad categories:

1. First-order optimization algorithms

These algorithms use the gradients of the parameters to minimize or maximize the loss function \( E(x) \). The most commonly used first-order algorithm is gradient descent. The gradient is the multivariate generalization of the derivative \( \frac{dy}{dx} \), which represents the instantaneous rate of change of \( y \) with respect to \( x \): for a multivariate function, partial derivatives along each variable are collected into a vector. One major difference between the gradient and the derivative is that the gradient of a function forms a vector field.

2. Second-order optimization algorithms

Second-order algorithms use second derivatives (the Hessian matrix) to minimize or maximize the loss function. This approach is not widely used because of the high computational cost of the second derivatives.
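The three losses above can be sketched in plain Python, directly transcribing the formulas; this is an illustrative sketch of the definitions, not the Keras implementation:

```python
import math

def mse(y_true, y_pred):
    """Mean squared error: mean of squared differences."""
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def poisson_loss(y_true, y_pred):
    """Poisson loss: mean of (pred - true * log(pred))."""
    return sum(p - t * math.log(p) for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_pred):
    """Binary cross-entropy for labels in {0, 1} and predictions in (0, 1)."""
    return -sum(t * math.log(p) + (1 - t) * math.log(1 - p)
                for t, p in zip(y_true, y_pred)) / len(y_true)
```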
In each iteration, gradient descent updates the parameters along the gradient at their current position. However, if the update direction depends only on the current position, this can cause problems such as oscillation. Some researchers have proposed a technique called momentum, which accelerates training relative to plain gradient descent by reinforcing updates along consistent directions and damping oscillations along irrelevant ones. In the parameter update process, the principle is to:

1. make convergence more stable and bring the network closer to the optimum;
2. reduce oscillation during training.

Four optimizers are used in the training process: Adagrad, Adadelta, RMSprop, and Adam.

Figure 49. Adam & binary cross entropy

Figure 50. Adam & Poisson

7.3 Loss Function Analysis

As the diagrams above show, binary cross entropy performed best on the training dataset, with the highest accuracy and a good loss rate. On the test dataset, MSE achieved a better loss rate than the other two methods, but binary cross entropy was far more accurate than either.

7.4 Optimizer Analysis

Figure 55. Model Accuracy comparison by different optimizers in train dataset

Figure 56. Model Loss comparison by different optimizers in train dataset

Apart from Adam, the other three methods reached similar peak accuracy and minimum loss on the training set, but Adam showed better continued learning ability. After 300 epochs, Adam also performed far better than the other three on the test dataset, with the lowest loss rate. The other three reached a low loss rate but rebounded to higher values after about 50 epochs, which indicates that with Adagrad, Adadelta, and RMSprop the model overfits the training set and cannot continue to perform well on the test set or in more general situations.
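The momentum idea described above can be illustrated with a toy one-dimensional sketch; the test function, learning rate, and decay factor here are illustrative assumptions, not values from the thesis:

```python
def gd_momentum(grad, x0, lr=0.1, beta=0.9, steps=200):
    """Gradient descent with momentum: the velocity v is an exponential
    moving average of past gradients, so updates are reinforced along
    consistent directions and oscillations are damped."""
    x, v = x0, 0.0
    for _ in range(steps):
        v = beta * v + (1 - beta) * grad(x)  # smooth the gradient
        x = x - lr * v                       # step along the smoothed direction
    return x

# minimise f(x) = x**2, whose gradient is 2 * x
x_min = gd_momentum(lambda x: 2 * x, x0=5.0)
```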
By comparing the performance of the three loss functions and four optimizers, Adam with binary cross entropy is the most comprehensive on the test set and will be used when deploying to real applications.

8 DEPLOYMENT

After training the model, it is time to deploy it in real applications to test its generalization capability.

8.1 Deploy to NAO Robot

Deploying the recognition system on the NAO robot is quite simple. First, we establish a framework in which the robot starts the recognition test, then load our model onto the robot and make it recognize 15 randomly generated CAPTCHA images; if the robot recognizes more than half of them, the model is considered to perform well. Google Cloud Speech-to-Text is used by the robot to detect the starting command, and the ALTextToSpeech protocol is used to make spoken responses. The OpenCV library is used to display the CAPTCHA images. OpenCV provides the cv2.imread() method to read images; one advantage of using OpenCV is that images loaded with cv2.imread() are returned as NumPy arrays, which makes it easier to work with the prediction output.
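The deployment code calls a helper named rgb2gray that is not reproduced in this chunk. A minimal NumPy sketch of such a helper, assuming H x W x 3 input and the standard luminosity weights (both are assumptions, since the thesis' own implementation is not shown here):

```python
import numpy as np

def rgb2gray(image):
    """Collapse an H x W x 3 RGB array to H x W grayscale using
    the standard luminosity weights (an assumed choice here)."""
    return np.dot(image[..., :3], [0.299, 0.587, 0.114])

# a 60 x 150 CAPTCHA image becomes a (1, 60, 150, 1) tensor scaled to [0, 1]
img = np.zeros((60, 150, 3), dtype=np.uint8)
x = rgb2gray(img).reshape(1, 60, 150, 1).astype('float32') / 255
```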
```python
# cv2, random.choice, and the model/tts/helper objects are set up earlier in the script
for i in range(15):
    text, image = random_captcha('./captcha/')
    image = cv2.imread(image)
    cv2.imshow(text, image)
    image = rgb2gray(image).reshape(1, 60, 150, 1).astype('float32') / 255
    with graph.as_default():
        prediction = model.predict(image)
    prediction_text = vec2text(prediction)
    tts.say('Captcha number ' + str(i + 1) + ' is ' + prediction_text)
    print('Prediction by Robot: ' + prediction_text)
    # strip the '.png' suffix from the file name before comparing
    if text.rsplit('.', 1)[0] == prediction_text:
        # first round uses the fixed phrase, later rounds a random one
        correct_answer = correct[0] if i == 0 else choice(correct)
        print(correct_answer)
        tts.say(correct_answer)
        count += 1
    else:
        wrong_answer = wrong[0] if i == 0 else choice(wrong)
        print(wrong_answer)
        tts.say(wrong_answer)
cv2.waitKey(0)
```

**Figure 59.** recognize CAPTCHA image

Figure 60. Robot recognizing CAPTCHA images

A complete test video can be watched at https://youtu.be/4pxHi9h1caU

8.2 Deploy to A Web Service

We also provide an interface to our recognition service for web applications. The Flask web framework is used to create the testing application, and the curl command is used to call the service.
```python
import base64
from io import BytesIO

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request
from keras.models import load_model
from PIL import Image

app = Flask(__name__)

# Test URL
@app.route('/ping', methods=['GET', 'POST'])
def hello_world():
    return 'pong'

# CAPTCHA recognition URL
@app.route('/predict', methods=['POST'])
def predict():
    response = {'success_received': False, 'prediction': '', 'debug': 'error'}
    received_image = False
    if request.method == 'POST':
        if request.files.get('image'):  # image file
            image = request.files['image'].read()
            received_image = True
            response['debug'] = 'get image'
        elif request.get_json():  # image file encoded by base64
            encoded_image = request.get_json()['image']
            image = base64.b64decode(encoded_image)
            received_image = True
            response['debug'] = 'get json'
    if received_image:
        image = np.array(Image.open(BytesIO(image)))
        image = rgb2gray(image).reshape(1, 60, 160, 1).astype('float32') / 255
        with graph.as_default():
            pred = model.predict(image)
        response['prediction'] = response['prediction'] + vec2text(pred)
        response['success_received'] = True
        response['debug'] = 'predicted'
    else:
        response['debug'] = 'No Post'
    return jsonify(response)

model = load_model(MODEL_FILE)   # load model
graph = tf.get_default_graph()   # get TensorFlow default graph
```

Figure 61. Deploy CAPTCHA recognition service to a web server

```
zhuwenhaodeMBP:naoqilibrary wenhao$ curl -X POST -F image=@1908.png 'http://localhost:5000/predict'
{
  "debug": "predicted",
  "prediction": "7908",
  "success_received": true
}
zhuwenhaodeMBP:naoqilibrary wenhao$ curl -X POST -F image=@2949.png 'http://localhost:5000/predict'
{
  "debug": "predicted",
  "prediction": "2949",
  "success_received": true
}
```

Figure 62. Call a CAPTCHA recognition service through CURL command

9 CONCLUSION

In this thesis, a CAPTCHA recognition system was developed using a convolutional neural network, a popular technique in the deep-learning field, and deployed to a humanoid NAO robot and a web service to perform recognition. During establishment and training, some issues were found that can be improved in future development.
As the label text is translated into a vector by one-hot encoding, the resulting NumPy array contains 36 zeros and 4 ones. Because of this, a model that merely predicts the correct positions of the zeros already scores highly on per-element accuracy: when I was training the model, accuracy started at around 90%, which is not a meaningful number for the initial stage of training. In the future, another encoding scheme could be used to solve this problem.

Another thing that can be improved is that the CAPTCHA images contain only digits, because introducing letters and other characters would require an enormous amount of data ($62^4$ combinations if uppercase and lowercase letters are included). This can be revisited if computing capability increases in the future.

In this project, establishing the convolutional neural network and designing the training model are the core parts. CAPTCHAs are becoming more and more difficult to crack, but as deep-learning algorithms develop, more advanced neural networks will be built to cope with increasingly complex CAPTCHA algorithms.

10 REFERENCES

[5] Y. James, "[Data analysis & Machine Learning] Lecture 5.1: An introduction to convolutional neural networks," 24 Dec. 2017. [Online]. Available: https://medium.com/jameslearningnote/%E8%B3%87%E6%96%99%E5%88%86%E6%9E%90-%E6%A9%9F%E5%99%A8%E5%AD%B8%E7%BF%92-%E7%AC%AC5-1%E8%AC%9B-%E5%8D%B7%E7%A9%8D%E7%A5%9E%E7%B6%93%E7%B6%B2%E7%B5%A1%E4%BB%8B%E7%B4%B9-convolutional-neural-network-4f8249d65d4f.
Design Alternatives for Process Group Membership and Multicast*

Kenneth P. Birman** Robert Cooper Barry Gleeson

TR 91-1257 (replaces 91-1185)

December 18, 1991

Department of Computer Science
Cornell University
Ithaca, NY 14853-7501

*This paper is a revision of TR 91-1185 (Jan. 1991)

**The first two authors are in the Dept. of Computer Science, Cornell University, and were supported under DARPA/NASA grant NAG 2-593. The third author is with the UNISYS Corporation, San Jose, CA.

Abstract

Process groups are a natural tool for distributed programming, and are increasingly important in distributed computing environments. However, there is little agreement on the most appropriate semantics for process group membership and group communication. These issues are of special importance in the Isis system, a toolkit for distributed programming [Bir91]. Isis supports several styles of process group, and a collection of group communication protocols spanning a range of atomicity and ordering properties. This flexibility makes Isis adaptable to a variety of applications, but is also a source of complexity that limits performance. This paper reports on a new architecture that arose from an effort to simplify Isis process group semantics. Our findings include a refined notion of how the clients of a group should be treated, what the properties of a multicast primitive should be when systems contain large numbers of overlapping groups, and a new construct called the causality domain. As an illustration, we apply the architecture to the problem of converting processes into fault-tolerant process groups in a manner that is “transparent” to other processes in the system. A system based on this architecture is now being implemented in collaboration with the Chorus and Mach projects.
Keywords: distributed computing, fault-tolerance, Isis, process groups, virtual synchrony, causal multicast, atomic broadcast.

1 Introduction

Isis is a toolkit for distributed programming that provides a set of problem-oriented tools built around process groups and reliable group multicast [BJ87, BSS91]. Process groups are a natural abstraction and have been used in a number of distributed systems [CZ85, OSS80, KTHB89, LLS90, PBS89, AGHR89]. However, the precise characteristics of group facilities differ among these systems, as do the protocols employed to implement them. The primary goal of this paper is to sort through the design choices at this level, arriving at a process group architecture that is simple, powerful and appropriate. A secondary goal is that the architecture should admit elegant solutions to classical problems in this area, such as transforming a program into an equivalent fault-tolerant one, without sacrificing efficiency. As evidence in support of our arguments we show how the architecture can be used to derive a simple fault-tolerance transformation. Consistent with our goals, the solution would (theoretically) perform as well as the best known solutions to this problem. Despite the fact that it would achieve high levels of concurrency, the solution is fully described at a high level and is surprisingly easy to understand. Our analysis draws on experience with the Isis system, which has been distributed to hundreds of sites since the first public software release in 1987.
Isis is presently used in diverse settings such as brokerage and banking applications, value-added telecommunications systems, wide-area seismic data collection and analysis, factory floor automation, document flow, distributed simulation, scientific computing, high-availability file management, reactive control, database integration, education and research [BC90]. Through participation in the design of a number of these distributed systems, we have gained insight both into the successful aspects of the technology, and those in need of further work. Successful Isis applications often share two characteristics: - **They depend on consistent, distributed process group state.** Isis provides tools for reading and writing replicated data, adapting to failures, transferring group data to new members, and viewing group membership. Many Isis applications using these tools rely on the guarantee that group members see *mutually consistent* sequences of updates for replicated information, and that a process can join the group and obtain its “current” state without possibly missing an update or seeing one twice. This property is useful for more than just replication of data. For example, group members are able to react to external events in a coordinated way, treating the group membership list as a form of data replicated among the members without running an additional agreement protocol. - **They employ large numbers of groups.** Isis was designed assuming that typical applications would be organized into some (small) number of fault-tolerant distributed servers, each implemented using a single process group. However, many Isis users seized upon groups as a fine-grained structuring construct, building applications with large numbers of overlapping groups. This trend motivates several of the architectural changes discussed below. 
Groups are used in a variety of ways in Isis applications: - **Groups used for fault-tolerance.** Here, some of the components of a system are transparently replaced by fault-tolerant process groups that mimic the original components. As we demonstrate in Section 5, our architecture permits this to be done without changing programs that interact with the modified components. - **Groups as services with clients.** In this case, group members provide services to client programs, either in a request-reply style, or through a registration interface with repeated callbacks (e.g. a broker's workstation might subscribe to a stock price publication service, receiving callbacks each time the price changes). Multi-level servers are common, with the processes that implement one service registering as clients of other services. - **Process groups for distributed or replicated objects.** In these applications, an object is typically an abstract data type with small state\(^1\) that may change rapidly. Reasons for replicating objects include improved fault-tolerance, and increased performance through concurrency or coherently replicated data. - **Groups used for parallel programming.** Several scientific computing projects have employed Isis to obtain coarse grained parallelism and fault-tolerance in simulations and graphics applications, running on networks of high-performance workstations. - **Groups used for fault-tolerant, distributed system management.** Isis has been used in application-oriented monitoring and control software for high-reliability, autonomous, distributed systems. The underlying application will often make no explicit use of Isis, although hooks may be included to permit the monitoring system to intervene when necessary. The numbers and uses of groups differ substantially from our original expectations, dating to when Isis was first developed. 
This has brought into question several of the basic assumptions underlying the initial architecture, leading us to ask how the system might need to be re-designed to simplify future development, improve performance and exploit emerging operating systems and hardware technologies, such as communication devices supporting high-speed multicast. This paper focuses upon the following questions:

- Why is explicit system support for process groups and group communication necessary?
- What types of groups are needed in distributed systems, and what patterns of client-server interactions should be supported?
- What should be the semantics of communication and membership in a single process group?
- How should these semantics be extended to multiple, overlapping groups?
- How can a process group system take advantage of the emerging generation of modular operating systems?

\(^1\)Larger database-style objects would normally be managed using conventional database packages. Isis tools can be combined with such packages, and a mechanism for dealing with databases is included within the toolkit.

We note that although the paper is intended to be self-contained and to define the terminology used, the issues considered here arise from the many, often contradictory, approaches to process groups and group communication that have been advanced. This results in a somewhat abbreviated presentation of some of the alternatives, and may make the paper difficult to read without some prior knowledge of the field.

2 Process groups

This section refines our terminology and confronts the first of the design questions: at what level groups and multicast should be implemented.

2.1 Group membership

A process group is a collection of communication endpoints that can be referenced as a single entity. Communication endpoints can be implemented in a number of ways. In Unix, each Isis process creates a socket which can be referenced by its internet address.
A communication endpoint would correspond to a send-right in Mach, an entity-ID in the V-System, a port UI in Chorus, or a capability in Amoeba. We assume multiple threads sharing an address space (i.e. a process in Unix or Amoeba, a task in Mach, or an actor in Chorus). This permits an address space to own several communication end-points, thus decoupling us from any specific model of processes or memory. Following the conventions of other group-based projects and the original Isis implementation, we will continue with the term process group, in this paper, rather than port group. However, our new architecture does allow multiple end-points per process. 2.2 Why provide support for process groups? The process group membership mechanism comprises the algorithms used to support joining and leaving groups, and to query the current membership list. One might ask whether these operations are more appropriately realized at the application level, or in a shared software subsystem such as Isis. Three issues arise: the importance and generality of the group mechanism, the performance implications of an application-level implementation, and the complexity of the solution. - **Standardization.** In Isis applications, process groups are a basic and heavily used programming construct. Assuming that a single, general mechanism can support a diverse user community without becoming encumbered by numerous special features – and we will argue that this is so – standardization has obvious benefits. - **Complexity.** The protocols required to support process groups are subtle and difficult to implement correctly. If non-experts are to use group-based programming structures, such as replicated data, there may be no choice but to implement the group mechanism in a shared subsystem. - **Performance.** The complexity of the protocols implies that it will be difficult to make all the necessary engineering decisions and performance trade-offs correctly. 
For example, it is by no means clear *a-priori* whether membership lists should be replicated at all group members, or cached at some smaller set of sites. In fact, we believe that there are strong technical and performance arguments in favor of a direct replication approach, but these arguments come down to engineering considerations that a typical user of a system might not be knowledgeable enough to make.² In a shared software subsystem, these issues would be addressed by the implementor of the subsystem – not by the authors of the applications that use the subsystem. This is desirable because it permits the largest possible set of users to benefit from the insight of a small, expert group of designers. Our preference for a system-supplied mechanism that explicitly manages group membership and replicates this information directly among the members may seem unrealistically biased in favor of making communication cheap at the expense of a more costly group membership facility. One might question this choice. As a matter of fact, we are familiar with applications in which changes to group membership are more frequent than communication. Fortunately, it is generally possible to convert “membership intensive” applications into communication intensive ones. For example, consider an application in which messages are sent to the set of idle servers in a compute service. If servers perform short tasks, membership in this group could vary rapidly. On the other hand, the full set of servers probably changes slowly. Our experience suggests rapidly changing ad-hoc groups are almost always subsets of more stable enclosing groups. ²Our work on Isis employs protocols in which replicating membership information has important performance advantages. But, it has taken us years of protocol design, implementation, and experimentation to arrive at this conclusion, and it is unlikely that a typical programmer would employ the best known solution if this is at all complex.
Given a system in which group membership changes are relatively costly but communication is cheap, a cost-effective solution would be to have the server group treat the "subset of lightly loaded servers" as a form of dynamically updated replicated data. Changes to the subset will now be cheap. We conclude that a system-level group facility is needed, and that accurate knowledge of group membership should be available to processes that commonly initiate multicasts to the group. 2.3 Which processes should be allowed to send to a group? In some systems [LLS90, PBS89] only members of a group may multicast to it. This simplifies group management but does not reflect the way programmers use groups, at least in Isis. In such an approach, client programs that wish to communicate with a service implemented by a group must either join the group (which does not scale well), or use point-to-point communication with individual group members (requiring the application programmer to implement a non-trivial protocol, and in particular to solve a difficult fault-tolerance problem in the case where the "agent" fails). We believe that process groups will often have both members and clients, and hence that this issue will be commonly encountered in any system supporting group programming. For example, a common use of groups in future distributed systems will be to make a system component fault-tolerant using replication (we give a protocol for this in Section 5). Here, the fault-tolerant program will be the group, and the programs that interact with it will be clients. Moreover, one would not wish to require that such clients be aware that they are interacting with a group, as opposed to a single entity. We conclude that a client-server model should be supported, in which clients can communicate reliably and transparently with groups. Implications of supporting such a notion of "clients" will be examined in depth in Section 3.1. 2.4 Should group multicast provide "strong guarantees"? 
Early work on process groups, such as the work in the V-system [CZ85], provided best-effort communication guarantees. Given a process group with stable membership, and assuming that nothing fails and that the communication subsystem is working reasonably well, the V multicast delivers a message to all group members. If any of these assumptions fails to hold, some members might not receive a message. Moreover, the order in which messages are delivered can differ from member to member. Isis differs from the V-system in adopting a multicast layer with very strong semantics: a program that uses Isis multicast knows exactly what to expect. We believe that this is one of the major reasons that Isis has turned out to be so easy to program in comparison with V, where the group multicast facility was used primarily to locate resources. However, it is not enough to simply accept that a multicast should provide strong guarantees. Multicast can be presented in many ways, and with many sorts of guarantees. What options exist at this level of a system? Various models of multicast interaction have been proposed: asynchronous, all-reply, one-reply, $k$-reply, and so forth. Isis supports all of these and our users have found them all important. Moreover, a group may receive multiple multicasts concurrently, or a stream of multicasts from a single sender. For this reason, communication primitives often provide system-enforced ordering properties.\footnote{In this paper, we consider only asynchronous systems, in which any timing constraints or deadlines are weak with respect to communication performance. Realtime communication protocols, such as the ones described in [CASD85], impose stringent timing requirements upon the operating system and frequently obtain determinism by introducing delays and idle periods.
Few current Isis applications need deadlines or priorities, hence we have chosen to concentrate on “logical” properties, such as delivery ordering and atomicity, in this paper.} Other potentially important properties include failure atomicity, namely all-or-nothing delivery guarantees even if processes or processors fail during a multicast, and membership atomicity, namely the guarantee that group membership changes are totally ordered and synchronized with group communication. Figure 1 illustrates two extremes for group communication. In an unordered execution no atomicity guarantees are provided. In a closely synchronous execution, one event occurs at a time, and multicast messages are delivered atomically to the full membership of the group at a single logical instant, during which both address expansion\footnote{We use the term address expansion to refer to the phase of a multicast during which the system determines the group members to which a message will be delivered.} and delivery occur. The virtually synchronous execution model supported by Isis is indistinguishable from a closely synchronous execution for a correct program, but relaxes synchronization to improve performance. Multicast ordering and atomicity issues are discussed more fully in Sections 3.2 and 3.3. In Section 3, we will discuss these options in some detail. To anticipate the conclusion of this discussion, we will argue that strong guarantees are important in most process-group based software. Lacking them, a system will be incapable of supporting important classes of applications. On the other hand, we will also suggest that unsophisticated users can (and should) be presented with a default form of multicast with very simple semantics. The idea is that naive users should employ a multicast primitive that is likely to behave as they would expect, while sophisticated users and subsystems will need flexibility to achieve the highest possible performance.
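The contrast between the two extremes of Figure 1 can be made concrete with a small simulation. This is an illustrative Python sketch, not Isis code, and the function names are invented: under best-effort delivery, two members may observe concurrent multicasts b and c in different orders, while a totally ordered layer forces one mutually observed sequence.

```python
# Toy contrast between best-effort and totally ordered multicast.
# Under best-effort delivery the network may present concurrent
# messages to different members in different orders; an ordered layer
# re-sequences them against one global order before delivery.
# (A real protocol buffers messages rather than sorting after the fact.)

def best_effort_deliver(members, arrivals):
    """arrivals: per-member list of messages in network arrival order."""
    return {m: list(arrivals[m]) for m in members}

def totally_ordered_deliver(members, arrivals, global_order):
    """Deliver each member's messages in the single agreed sequence."""
    rank = {msg: i for i, msg in enumerate(global_order)}
    return {m: sorted(arrivals[m], key=rank.__getitem__) for m in members}

members = ["s1", "s2"]
arrivals = {"s1": ["b", "c"], "s2": ["c", "b"]}   # concurrent multicasts

unordered = best_effort_deliver(members, arrivals)
ordered = totally_ordered_deliver(members, arrivals, ["b", "c"])
```

With no guarantee, s1 and s2 disagree on the order of b and c; with the ordered layer, both observe the same sequence.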
2.5 Why not layer group communication over RPC? A frequently-asked question concerns whether group communication should be implemented over RPC. Many current operating systems are RPC-based, and this protocol is often highly optimized and well supported. For this reason, if a user were to implement a multicast protocol at the application layer, this would have to be done over either RPC, a stream protocol such as TCP (which makes little sense),\(^5\) or a datagram protocol such as UDP which, because it is unreliable and infrequently used, poses many practical obstacles.\(^6\) Moreover, many styles of group communication are essentially generalizations of RPC, and many of the techniques used to support RPC carry over to group multicast protocols. Thus, it may seem natural to layer protocols such as group multicast over RPC, and to “grease the skids” so that RPC will be as fast as possible. In principle, one could build a reliable multicast protocol over an RPC transport, and a group mechanism over this multicast. Given transactional RPC [LS83, Spe85], such a multicast could be made atomic, with parallel threads (lightweight processes) doing RPCs to deliver the messages, and using a two-phase commit to ensure atomicity. \(^5\)The problem with implementing multicast over TCP is that TCP is optimized for continuous, stream-style transmission of large quantities of data from one source to one destination. The protocol is mismatched with a bursty, one-to-many communication pattern – a criticism that would not apply to RPC. The same comments apply to X.25, the OSI stream protocol. \(^6\)RPC protocols automatically deal with message loss and retransmission, fragmentation of large packets into small ones, etc. All of these problems would have to be addressed by hand in a protocol layered over UDP, or the equivalent OSI datagram protocol.
Of course, such a solution would also need to address the concerns of the remainder of this paper: multicast ordering, synchronization of multicast address expansion with group membership changes, etc. A protocol with predictable behavior in all of these respects would be no simpler over RPC than any other technology. The question, therefore, is one of performance. Of special interest to us are applications that use asynchronous group communication to achieve high performance. Communication is synchronous if it follows a request-reply style, whereby the thread that sends a message blocks waiting for the reply. Asynchronous communication arises when the sending thread does not block and no reply message is sent. Although underlying message transport layers still need to exchange acknowledgement and flow-control messages, these impose little overhead and do not delay the higher-level protocols, or require further synchronization in the application. Asynchronous communication has an obvious performance benefit if no replies are needed from the destination processes. This benefit becomes a necessity when the number of destinations grows large, because of the cost of collecting superfluous replies at the requester. Implementing an asynchronous multicast communication protocol over an RPC layer would cause severe congestion at the sender. A second factor is that multicast hardware would be very difficult to exploit from an RPC-based implementation. A third concern would be the potentially large amount of memory needed for the stacks of the threads associated with pending multicasts on the sender side: as many as one thread per destination per multicast. Thus we conclude: - Group membership management and group communication are commonly used services that should be implemented once, in a common shared subsystem. - Multicast should be implemented over asynchronous message passing or transport-level multicast. 2.6 Does multicast belong inside the operating system? 
There remains the question of whether multicast support should exist in the operating system or in a shared user-space library. The key issue, again, is performance. For good performance, multicast should be implemented "near the wire"; in other words, the latency of network device interrupts should be minimized. To fully justify this claim we would need to review the protocols that have been offered in support of group multicast, an exercise that would exceed the scope of this paper. Briefly, though, any protocol for group multicast will involve delaying some messages and exchanging background messages of one sort or another. It follows that if all protocol messages must reach the user’s address space, an expensive cross-address space call will have to be done (perhaps even a scheduling action and several context switches) just to deliver a message that might not trigger execution of any application-related code. The cost savings of putting at least the core functionality of the multicast mechanism in the operating system can thus be substantial. Experimental work that has placed some form of multicast directly in the operating system shows that startling performance gains are attainable using this approach [DC90, KT91, PBS89]. These systems are as much as 25 times faster than the current UNIX-based Isis implementation, despite the fact that multicasts in this version of Isis substantially outperform other UNIX-based multicast protocols with which we are familiar [BSS91]. On the other hand, multicast will not be needed by every operating system user, so we should neither require nor expect every operating system to provide it. Thus we are attracted by modular operating systems [AGHR89, Ras86] in which a small kernel and a collection of operating system modules communicate using fast inter-module calls. In this way, group and multicast support can be provided in a separate, optional operating system module.
We conclude that where possible (and notably in modular operating systems) group and multicast should be provided in a separate operating system module. 3 Detailed design choices for a single group The goal of this section is to explore, in detail, the choices for group and multicast semantics within a single group. Section 4 explores issues raised when multiple groups co-exist in a single application. 3.1 Group structure: Members and clients In Section 2.3 it was suggested that processes outside a group will often need to interact with the group as a single entity. From experience with Isis users, we have identified four group “structures” that frequently arise in Isis programs (Fig. 2). Each responds to a different programming need. A peer group is composed of a set of members that cooperate closely. Fault-tolerance and load-sharing are dominant considerations in these groups, which are typically small. In a client-server group, a potentially large number of clients interacts with a peer group of servers. Requests may be multicast or issued as RPCs to some favored server after an initial setup. The servers either respond to requests using point-to-point messages, or use multicast to atomically reply to the client while also sending copies to one another. The latter approach is useful for fault-tolerance: if a primary server fails, multicast atomicity implies that a backup server will receive a copy if (and only if) the client did. Thus, a backup server will know which requests are still pending. A special case of client-server communication arises in the diffusion group, which supports diffusion multicasts. Here, a single message is sent by a server to the full set of clients and servers. In current Isis applications, diffusion groups are the only situations in which a typical multicast has a large number of destinations. The use of multicast hardware to optimize this case is thus attractive. These three cases are easily distinguished at runtime in Isis.
The only explicit actions by the programmer are to register as a member (using the pg_join system call) or client (pg_client), and to designate diffusion multicasts using an option to the Isis multicast system call. A single group may operate in both client-server modes simultaneously. The last common group structure is the hierarchical group. In large applications with a need for sharing, it is important to localize interactions within smaller clusters of components. This leads to an approach in which a conceptually large group is implemented as a collection of subgroups. In client-server applications with hierarchical server groups, the client is bound, transparently, to a subgroup that accepts requests on its behalf. A root group is responsible for performing this mapping, which is done using a stub linked into the client’s address space that routes messages to the appropriate subgroup. The root group sets up this binding when a process becomes a group client, and may later re-bind the client to a different subgroup. Group data is partitioned so that only one subgroup holds the primary copy of any data item, with others either directing operations to the appropriate subgroup or maintaining cached copies. Multicast to the full set of group members is supported, but its use is discouraged in this architecture. For brevity, we omit detailed discussion of one-time client-server interactions, and groups used only to monitor membership, but never for communication. Both merit special treatment in an implementation. For example, a large membership-only group should be supported as a client-server structure, minimizing the number of processes informed on each membership change. The servers would be informed of monitoring requests and would only communicate with a client when a monitor is triggered. Explicit support for these group structures is important for performance and scaling.
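A toy model may help fix the member/client distinction. The names pg_join and pg_client are taken from the text above, but the data structures here are invented for illustration, not the Isis implementation; the point is that clients carry minimal per-client state and are excluded from coordination protocols.

```python
# Toy sketch of the member/client distinction in a process group.
# Members hold replicated state and run coordination protocols;
# clients are tracked only for routing and reply delivery, so they
# never participate in membership or ordering protocols.

class Group:
    def __init__(self, name):
        self.name = name
        self.members = []   # full members: replicated state, protocols
        self.clients = []   # clients: minimal routing state only

def pg_join(group, proc):
    """Register proc as a full group member."""
    group.members.append(proc)

def pg_client(group, proc):
    """Register proc as a client of the group."""
    group.clients.append(proc)

def coordination_participants(group):
    # Membership changes and ordering protocols run among members only,
    # which is what lets groups scale to many clients.
    return list(group.members)

g = Group("price-service")
for s in ("srv1", "srv2", "srv3"):
    pg_join(g, s)
for c in ("trader1", "trader2"):
    pg_client(g, c)
```

Adding a thousandth client in this model costs one list entry at the group, while the coordination set stays at three servers.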
Clients are more numerous than members, but clients of a group never communicate with each other via that group. This fact can be exploited to reduce the amount of information maintained per-client, and permits clients to be omitted from most group coordination protocols. If clients are treated as fully fledged group members (as required in most group-based systems) then groups may not provide sufficient performance for many applications. 3.2 Atomicity In Section 2, we suggested a need for multicast primitives supporting strong semantics. In this section, we begin a more detailed examination of the options by looking at the question of atomicity. As stated earlier, a process group system may support two forms of atomicity: membership atomicity and failure atomicity. The first provides the illusion of group membership that changes instantaneously as members join, leave or fail. The second ensures that multicasts interrupted by a crash will be transparently terminated. Isis supports both properties, and these have proved important to users of the system. Consider first the atomicity of group join/leave/fail. It is difficult to program with process groups in which the expansion of a multicast address from a group address to a list of members is not atomic (i.e. there is no guarantee concerning exactly which processes received a particular multicast, as illustrated in Fig. 3.a). In Isis, this is guaranteed to be the complete membership of the group, defined at a logical instant when delivery occurs (Fig. 3.b). Similar comments apply to failure atomicity. Process group algorithms are greatly simplified by the ability to send a multicast without the concern that an unlikely event, such as a crash, will result in partial delivery. 
When a group member fails, Isis further guarantees that other processes will receive the failure notification only after having received all outstanding messages from the failed process, and that failures leave no gaps in the causal message history. These properties eliminate bizarre failure sequences, such as the delivery of a message from a process after system state maintained for that process has been garbage collected. Although some systems, notably the V-System, have developed applications using non-atomic group semantics, the primary use seems to be in name services that use multicast for service location. In this context, the consequences of a missed reply or an inaccurate membership protocol are simply an occasional loss of performance. Isis tools and applications build other forms of atomicity on top of the membership and failure atomicity semantics of groups. For example, the Isis state transfer tool copies data from an existing group member to a joining process. (The application designer determines what the state should include.) State transfer is a key to supporting groups with consistent distributed state. However, it is important that the state transferred correspond to the programmer's notion of group state at the (logical) instant of the join. Obtaining this property requires that state transfer be synchronized with the reception of messages that might change the state. Specifically, all messages sent to the group before the new member was added must be delivered before the state is sent. Messages delivered to the group after this event must include the new member. Finally, the event by which the old and new members are informed of the membership change (through a callback) must be coordinated to occur at the same point in the execution of each. We believe that, in the absence of strong atomicity properties, it would be impossible to define (much less implement) state transfer. 
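The synchronization requirements of state transfer can be sketched as follows. This is a hypothetical single-threaded simulation, not the real mechanism (which involves message queues and callbacks): messages sent before the join are applied before the snapshot is taken, and messages sent after the join reach both old and new members.

```python
# Sketch of state transfer at a logical join point. All messages sent
# to the group before the join are delivered before the state snapshot
# is sent to the joiner; messages after the join include the joiner.

class Member:
    def __init__(self, state=0):
        self.state = state
    def deliver(self, delta):
        self.state += delta    # toy "state": a running sum of updates

def join_with_state_transfer(old, pending_before_join, new_member,
                             pending_after_join):
    # 1. Flush messages that causally precede the join.
    for m in pending_before_join:
        old.deliver(m)
    # 2. Snapshot and transfer state at the logical instant of the join.
    new_member.state = old.state
    # 3. Later messages are delivered to old and new members alike.
    for m in pending_after_join:
        old.deliver(m)
        new_member.deliver(m)

veteran = Member()
rookie = Member()
join_with_state_transfer(veteran, [5, 7], rookie, [3])
```

If step 1 were omitted, the rookie would miss the updates 5 and 7 and the replicas would diverge, which is exactly the inconsistency the atomicity properties rule out.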
Membership atomicity is useful for another reason: it gives process group members implicit knowledge about one another's states. This permits each group member to use the same deterministic function for choosing the primary site in a data replication algorithm, or for subdividing work in a parallel computation, for example. Because of membership atomicity, this function operates only on local data (the synchronized group membership list) but achieves group-wide consistency. Several Isis tools are driven by atomic group membership changes, making no use of any other communication between group members. We conclude that in systems like Isis, membership atomicity and failure atomicity are both needed. 3.3 Causal and total multicast orderings In Section 2.4, we observed that there are many possible multicast delivery ordering guarantees. This section focuses on the choice between causal and total ordering in a single group, while the following sections examine multicast ordering in systems with large numbers of possibly overlapping process groups. Although Isis supports a number of multicast ordering alternatives, application builders are primarily concerned with two of these, _cbcast_ and _abcast_. The _cbcast_ protocol delivers messages in the order they were sent (the _causal_ or _happens before_ order that is natural in distributed systems [Lam78]). For example, in Fig. 1.b, multicast a causally precedes multicasts b and c, but b and c are concurrent. _Cbcast_ would therefore deliver a before b or c, at all destinations, but the relative delivery order used for b and c would be unconstrained and might vary from process to process. That _cbcast_ does not order concurrent multicasts is not necessarily a drawback. Often, application-level synchronization or scheduling mechanisms are used to serialize conflicting operations: further serialization of multicasts is superfluous.
_Cbcast_ is attractive in such cases, because there is no built-in delay associated with the algorithm. In fact _cbcast_ never delays a message unless it arrives out of order. The _abcast_ protocol delivers messages to group members in a single mutually observed order. Referring to Fig. 1.b, this implies that processes s1, s2 and s3 would receive multicasts a, b and c in the same order. This extra ordering comes at a significant cost: _any abcast_ protocol delays some (or all) messages during the period when this order is being determined. For example, in one common implementation of _abcast_, recipients of a message wait for an ordering _message_ from a distinguished _sequencer_ process. The nature of the delay varies from protocol to protocol, but the presence of a delay of this sort is intrinsic to the _abcast_ ordering property.

\textbf{The performance implications of using \texttt{abcast} instead of \texttt{cbcast}}

The extra delay with \texttt{abcast} can lengthen the critical path of a distributed computation. In a common usage of multicast, a process multicasts an operation to a group that includes itself, and upon receiving its own multicast performs the operation. By acting on the operation after it has received its own multicast the process is certain that it is performing the operation in an order consistent with the other members of its group, and that the other members are guaranteed to receive the multicast and could take over the operation should this process fail (because of failure atomicity). Where \texttt{abcast} is used, the sending process may not act on the message until a total ordering for delivering it has been decided. Unless the sender is also the sequencer (which is not generally the case) this delay will involve a remote communication.
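The sequencer-based scheme mentioned above can be sketched as a toy simulation (names and structure are illustrative, not the Isis protocol). It shows the intrinsic delay: a receiver must hold a message, even one that has already arrived, until the ordering messages for all earlier sequence numbers have been applied.

```python
# Sketch of sequencer-based totally ordered multicast (abcast): a
# distinguished sequencer assigns each message a global sequence
# number, and receivers deliver strictly in sequence-number order,
# buffering anything that arrives early. Illustrative only.

class Sequencer:
    def __init__(self):
        self.next_seq = 0
    def order(self, msg_id):
        s = self.next_seq
        self.next_seq += 1
        return (s, msg_id)    # the "ordering message"

class AbcastReceiver:
    def __init__(self):
        self.pending = {}     # msg_id -> payload, awaiting its turn
        self.orders = {}      # seq -> msg_id
        self.expected = 0
        self.delivered = []

    def receive_data(self, msg_id, payload):
        self.pending[msg_id] = payload
        self._try_deliver()

    def receive_order(self, seq, msg_id):
        self.orders[seq] = msg_id
        self._try_deliver()

    def _try_deliver(self):
        # This wait for the next expected sequence number is the
        # intrinsic abcast delay described in the text.
        while (self.expected in self.orders
               and self.orders[self.expected] in self.pending):
            mid = self.orders.pop(self.expected)
            self.delivered.append(self.pending.pop(mid))
            self.expected += 1

seq = Sequencer()
r = AbcastReceiver()
r.receive_data("m2", "second payload")   # data arrives out of order
o1 = seq.order("m1")
o2 = seq.order("m2")
r.receive_order(*o2)                     # m2's order known, m1 must go first
r.receive_data("m1", "first payload")
r.receive_order(*o1)                     # now both deliver, in global order
```

Note that "m2" sat buffered through three events even though its data had arrived first; this is the latency cost the paper attributes to any abcast.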
In contrast, a \texttt{cbcast} implementation need never delay delivery of the message at the sending process, and in general delivery at one destination is never delayed because of slow response at another destination. In this sense, a \texttt{cbcast} implementation can be optimal. Schmuck has shown that distributed algorithms can be built primarily from \texttt{cbcast} [Sch88, BJ89]. This is done by demonstrating that most algorithms can be recoded in a style that enforces mutual exclusion between conflicting operations, for which \texttt{cbcast} suffices. In Isis, this transformation is used extensively for performance reasons: the \texttt{abcast}-based algorithms may be simpler to understand, but are often much slower. In particular, the latency between transmission and delivery of a \texttt{cbcast} is at least a factor of two smaller than for \texttt{abcast}. Moreover, at the sender, the difference can be a factor of one hundred or more. The problem is that if the sender needs a copy of its own message, in the same order as the other group members will see it (i.e. for a replicated update), \texttt{abcast} will block while \texttt{cbcast} can be used without blocking. This is because \texttt{abcast} has to deal with the case where two senders concurrently communicate to the same group. Even if this is uncommon, \texttt{abcast} cannot deliver the message to any destination until it is known to be the “next” one, and this requires some communication with other potential senders. In contrast, \texttt{cbcast} can be delivered immediately at the sender. Distributed systems, and indeed computing systems of all sorts, are notoriously bursty: often there will be very few active threads. By blocking the sender of a multicast, \texttt{abcast} may delay one of the only things going on in the entire system! Thus, in applications where the sender of a multicast is also a destination, the benefit of using \texttt{cbcast} instead of \texttt{abcast} can be dramatic.
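One standard way to realize the cbcast delivery rule is with vector timestamps. The following is a minimal single-group sketch (the production Isis protocol described in [BSS91] uses a more elaborate, compressed scheme); it shows a message b, causally after a, being buffered at a process until a has been delivered.

```python
# Minimal vector-timestamp causal delivery for one group of N
# processes. A message from sender i with timestamp ts is deliverable
# at process q when ts[i] == vc_q[i] + 1 and ts[k] <= vc_q[k] for all
# k != i; otherwise it is buffered. Sketch of the cbcast rule only
# (self-delivery is omitted for brevity).

N = 3

class CausalProcess:
    def __init__(self, pid):
        self.pid = pid
        self.vc = [0] * N
        self.buffer = []
        self.delivered = []

    def cbcast(self, payload):
        # Note: no delay at the sender -- the message is stamped and
        # goes out immediately.
        self.vc[self.pid] += 1
        return (self.pid, list(self.vc), payload)

    def receive(self, msg):
        self.buffer.append(msg)
        self._try_deliver()

    def _deliverable(self, msg):
        sender, ts, _ = msg
        if ts[sender] != self.vc[sender] + 1:
            return False
        return all(ts[k] <= self.vc[k] for k in range(N) if k != sender)

    def _try_deliver(self):
        progress = True
        while progress:
            progress = False
            for msg in list(self.buffer):
                if self._deliverable(msg):
                    self.buffer.remove(msg)
                    self.vc[msg[0]] = msg[1][msg[0]]
                    self.delivered.append(msg[2])
                    progress = True

p0, p1, p2 = CausalProcess(0), CausalProcess(1), CausalProcess(2)
m_a = p0.cbcast("a")
p1.receive(m_a)                 # p1 delivers a ...
m_b = p1.cbcast("b")            # ... then sends b, so b causally follows a
p2.receive(m_b)                 # b reaches p2 before a: buffered
p2.receive(m_a)                 # a arrives: a then b become deliverable
```

A message is delayed only when it actually arrives out of causal order, matching the claim that cbcast never delays a message otherwise.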
To summarize, we have identified a two-level issue. First, asynchronous systems are likely to outperform synchronous systems by a substantial factor (in the current version of Isis, as much as one to two orders of magnitude). Second, given a system that uses multicast communication, the \texttt{cbcast} delivery ordering property will be substantially cheaper to provide than the \texttt{abcast} property, and this is true regardless of whether the sender uses the protocol synchronously or asynchronously.

\textbf{The pervasiveness of causality obligations}

\textbf{Abcast} may seem strictly stronger (more ordered) than \textbf{cbcast}, since concurrent multicasts are ordered. However, \textbf{abcast}, in most definitions, is actually not required to use an order consistent with causality. Consider a process that sends two asynchronous \textbf{abcast} messages. It would be normal to expect that these be delivered in the order sent, and most \textbf{abcast} protocols have this property in the absence of failures. However such a non-causal (or "mostly causal") \textbf{abcast} should not be used asynchronously because it does not guarantee this property. For these reasons we believe that \textbf{abcast} should support both a total and a causal order. Such a \textit{causal abcast} protocol can be built over \textbf{cbcast} [BSS91].\footnote{Those familiar with the previous Isis work will wonder where the \textbf{gbcast} protocol fits into this. In the original versions of Isis, \textbf{abcast} and \textbf{cbcast} were completely unordered with respect to each other. \textbf{Gbcast} was totally ordered with respect to both \textbf{abcast} and \textbf{cbcast}, and was needed to implement group membership atomicity. However, some applications also used the protocol. The equivalent of \textbf{gbcast} is still present within the group join mechanism, and is implemented using a \textbf{cbcast} that triggers a group flush prior to delivery.
However, we have determined that Isis users who employed \textbf{gbcast} at the application level generally could have obtained the same effect using a causally ordered \textbf{abcast}, and that given this primitive, \textbf{gbcast} can be viewed as a purely internal mechanism. This simplifies groups as seen by users.} In discussing the option of building multicast over RPC, we stressed the need for asynchronous communication, and in the discussion of the previous section reiterated this issue. Indeed, delay is often the most serious threat to performance in distributed systems. Delays are especially apparent in applications that maintain replicated data using read and write operations, with a locking or token passing scheme used to avoid conflicts. Any delay when doing a read or write operation may be visible to the user of such an application. On the other hand, the latency before all replicas are updated is invisible unless it impacts on read or write response times, or on availability. Using a causally consistent communication protocol, one can code completely asynchronous replicated data management algorithms—regardless of whether that protocol is \textbf{abcast} or \textbf{cbcast}. The user programs as if updates were synchronous, and the causal ordering property, combined with failure atomicity, ensure that the execution respects this logical property [BJ87, BJ89, Sch88, LLS90]. Equally, a protocol that might violate causality is unsafe for asynchronous use, even if it still provides a total order. Unless causal obligations are observed, the initiator of an operation must wait until completion of the operation is acknowledged before proceeding. Otherwise the total order might enforce an arbitrary serialization that violates causality. By the same reasoning, it must be possible for point-to-point communication in a process group setting to convey the causality obligations. 
For instance in a computation spanning two processes, one process may initiate an asynchronous multicast, and then send an RPC to the other process, which initiates a second asynchronous multicast. The second multicast should causally follow the first. In Isis a point-to-point \texttt{cbcast} achieves this effect. \textbf{Message stability} The use of asynchronous communication raises an additional problem of message stability. A message is said to be \emph{k-stable} if its delivery is assured provided that no more than \( k \) failures occur, and is \emph{stable} (where \( k \) is omitted) if delivery is certain to occur. For example, suppose that a process, \( p_1 \), sends multicast \( a \) to processes \( p_2 \) and \( p_3 \). Process \( p_2 \) receives \( a \) and sends multicast \( b \) to \( p_3 \). If \( a \) was not stable at the time of its delivery to \( p_2 \), the failure of \( p_1 \) might prevent \( a \) from (ever) being delivered to \( p_3 \). This represents a form of communication deadlock, since messages from \( p_2 \) to \( p_3 \) will now be delayed indefinitely. A related issue arises if process \( p_2 \) takes an externally visible action based on the reception of \( a \). Here, it may be that \( p_2 \) should delay the action until \( a \) and its causal predecessors are stable, since failures might otherwise create a situation in which an irreversible action was taken but no operational process in the system realizes this. Although these problems can be avoided by delaying delivery of a message until it and all of its causal antecedents are stable, this introduces a tradeoff between the levels of performance and safety needed in the application. We favor allowing messages to be delivered before they become stable, and providing a per-group \texttt{pg_flush} operation that delays the caller until stability is achieved for any asynchronous messages pending in the group, and for their causal predecessors. 
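The stability bookkeeping behind the flush primitive can be sketched as follows. pg_flush is the name used in the text; the tracker and its acknowledgement interface are invented for illustration, and a real flush would block the caller rather than merely report pending messages.

```python
# Sketch of message stability tracking: a message becomes stable once
# every destination has acknowledged it, so its delivery no longer
# depends on the survival of the sender. Illustrative only; a real
# implementation piggybacks acks and blocks in pg_flush.

class StabilityTracker:
    def __init__(self, destinations):
        self.destinations = set(destinations)
        self.unacked = {}     # msg_id -> destinations yet to ack

    def sent(self, msg_id):
        self.unacked[msg_id] = set(self.destinations)

    def ack(self, msg_id, dest):
        self.unacked[msg_id].discard(dest)
        if not self.unacked[msg_id]:
            del self.unacked[msg_id]   # fully acknowledged: stable

    def is_stable(self, msg_id):
        return msg_id not in self.unacked

    def pg_flush(self):
        """A real flush would block; here we just report pending work."""
        return sorted(self.unacked)

t = StabilityTracker(["p2", "p3"])
t.sent("a")
t.ack("a", "p2")            # p3 has not acked, so "a" is not yet stable
pending = t.pg_flush()
t.ack("a", "p3")            # now "a" is stable; a flush would return
```

In the deadlock scenario above, p2 would call something like pg_flush before taking an externally visible action, ensuring that the failure of p1 can no longer prevent "a" from reaching p3.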
We are also considering a system call to specify the stability parameter \( k \) for a given group. An analogous problem arises in file systems, when output to a disk is cached or buffered, and is typically solved in a similar way by providing a system call such as the Unix \texttt{fsync} operation.

\textbf{Summary} To summarize the arguments in this section:
\begin{itemize}
\item Asynchronous operations are the single most important factor in obtaining good performance in distributed systems, regardless of the underlying communication primitive.
\item Asynchronous operations create causal delivery obligations, hence group communication should respect causality.
\item \textbf{Cbcast} is used to implement causal \textbf{abcast}, hence it should be the core communication protocol in our process group architecture.
\item (Causal) \textbf{abcast} is slower than \textbf{cbcast} and should be avoided by sophisticated users. Less sophisticated users find \textbf{abcast} easier to understand and should avoid \textbf{cbcast}.
\item The message stability problem closely resembles the problem of flushing file system I/O buffers, and is readily addressed by providing a user-callable flush primitive.
\end{itemize}

4 Ordering properties that span group boundaries

The \textit{Isis} system is notable for enforcing multicast ordering properties across group boundaries. Here we re-evaluate the usefulness of these semantics, while considering their cost and complexity.

4.1 Who uses overlapping groups?

Many Isis applications employ multiple, overlapping groups. In object-oriented applications group overlap is often carried to an extreme. Here, each program is typically composed of some set of objects, and any object that maintains distributed state is implemented by a group. A single process may thus belong to many groups. Large numbers of groups also arise when Isis is used for transparent fault-tolerance in the process pair style [Bar81], with a shadow process backing up each real process.
Here, each communication entity in the system is represented by a group containing two members: a primary and a backup. Some Isis applications superimpose multiple groups on the same set of processes. For example, in a stock trading application, a service that computes bid/offered prices for a stock (a diffusion group) might also provide historical information on demand (a request-reply interaction). Moreover, individual processes within the server set may well subscribe to other services.

4.2 Should causality be preserved between groups?

Consider a graphics application that uses a blackboard object, containing the scene model, and a task-queue object, specifying views to be rendered (see Figure 4). Both objects allow asynchronous updates. A typical execution sequence involves posting data about a problem on the blackboard and then adding new tasks to the task list. Idle servers remove these tasks and consult the blackboard for scene data. For fault tolerance or performance reasons, the blackboard and the task bag might both be implemented as process groups. Let us call the blackboard group $B$ and the task bag group $T$. Group $B$ has some number of members, and at least two clients: Program 1 ($p1$) and Program 2 ($p2$). Similarly, group $T$ has $p1$ and $p2$ as clients. Thus, these two groups overlap at $p1$ and $p2$. For correct behavior, it is essential that when server $p2$ consults the blackboard (step 4 in Figure 4), it finds the data that $p1$ posted before putting $p2$'s task in the task bag. There are two ways this could be accomplished:
\begin{itemize}
\item Make $p1$ wait at step 1 until it receives an acknowledgement from group $B$, indicating that the parameters have been posted, before adding the task to the task bag, or
\item Make $p2$ wait at step 4 if the blackboard update from step 1 is not yet complete.
\end{itemize}
These two solutions perform very differently. It is highly unlikely that the blackboard update will not be complete at the time $p2$ executes step 4.
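The second option can be pictured with a toy blackboard in which each task carries the version of the blackboard update it depends on. The following is a hypothetical sketch (our names and structure, not the paper's code): the poster never waits, and a reader blocks only if the causally required update has not yet arrived.

```python
# Sketch of "wait at the reader": the task bag entry records the blackboard
# version it depends on, so only a reader that got ahead ever blocks.
import threading

class Blackboard:
    def __init__(self):
        self.version = 0
        self.data = {}
        self.cond = threading.Condition()

    def post_async(self, key, value):
        """Asynchronous update; returns immediately with the new version."""
        with self.cond:
            self.version += 1
            self.data[key] = value
            self.cond.notify_all()
            return self.version

    def read(self, key, min_version):
        """Blocks only when the causally required update is still missing."""
        with self.cond:
            self.cond.wait_for(lambda: self.version >= min_version)
            return self.data[key]
```

Here $p1$ would place (task, version) in the bag after posting; $p2$ passes that version to `read`.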
The first solution delays $p1$ every time it posts data to the blackboard, just to cover the unlikely case. The second solution only delays execution of $p2$ when absolutely necessary, and never delays $p1$ (except possibly for flow control reasons). Of course, to implement the second solution, there must be some way to recognize that the message sent by \( p2 \) at step 4 causally follows the message sent by \( p1 \) at step 1. This causality obligation in group \( B \) must somehow be propagated through the task bag (group \( T \)). In general, \texttt{cbcast} is used to ensure that sequences of causally related message events are processed in order. Where overlapping groups are concerned, the question is whether causal ordering should be enforced when a chain of events leaves some group, spans other groups, and then some operation re-enters the original group. This situation is schematically depicted in Figure 5. Here, the conflict arises within a \textit{single} group, between the original operation and a later, causally dependent one. In a sense, each chain of causally related events represents an execution sequence, similar to a thread of control, that must be honored. Our belief in an asynchronous style of computation argues that causality should be preserved here.

4.3 Should causality always be preserved between groups?

Suppose that the author of our graphics task bag and blackboard application decides to include a debugging facility. This debugger should be able to halt execution of the entire application, then provide the user with facilities for probing the state of each of the application's processes. Suspending execution could be done with an asynchronous multicast to a group containing all the processes to be debugged. The debugging process could then communicate with the various application processes via RPC, invoking special state reporting code in each process.
Suppose that the new debugger is invoked in the situation described in the previous section. Execution is halted just before server \( p_2 \) consults the blackboard (step 4 in Figure 4). Suppose further that server \( p_1 \)'s asynchronous blackboard update (step 1 in Figure 4) has not yet been posted. If interactions with the debugger respect causality, the debugger is now in a very difficult position. If it interrogates the state of server \( p_1 \) or \( p_2 \), or the task bag, it will lose its ability to interact with the blackboard. The problem is that when the debugger receives a message (e.g. an RPC reply) from \( p_1 \), \( p_2 \) or the task bag, the debugger's execution becomes causally dependent on \( p_1 \)'s asynchronous blackboard update (step 1 in Figure 5). Thus, messages from the debugger to the blackboard are constrained to be delivered after the message in step 1 is delivered. Since normal execution of the blackboard has been halted, neither message can be delivered. (Note that it was possible for the debugger to halt execution as described because the debugger's first message is not causally related to any activity in the debugged processes. Thus, the debugger's halt message can be delivered to the blackboard before \( p_1 \)'s update message, without violating causality.) Clearly, it would be preferable if messages between the debugger and the debugged processes were completely unrelated to the messages between the debugged processes. Other circumstances where it seems inappropriate to preserve causality between groups include:
\begin{itemize}
\item \textbf{Performance Monitoring.} The issues here are identical to those associated with debugging.
\item \textbf{Out-of-band Communication.} \texttt{cbcast} is a generalization of FIFO message ordering: it prevents "out-of-band" communication.
\item \textbf{Background "bookkeeping" algorithms,} such as garbage collection, deadlock detection, and orphan detection.
\end{itemize}
There is a more general way to look at these examples.
Consider a program built of multiple independent subsystems. Each of these subsystems might be composed of several objects, represented by process groups, between which causality should be preserved. Yet, the subsystems may be completely independent of each other, and in some settings (e.g. when an application combines several subsystems that run at different priorities), the potential delays introduced by the need to enforce inter-group causality would be inappropriate. In the examples above, the debugging and monitoring parts of the application are subsystems that must run at higher priority than the basic graphics application, while bookkeeping operations typically run at a lower priority.

4.4 How visible should causality information be?

The examples above argue that the application programmer must have some control over the propagation of causality information. What form should this control take? What granularity of control is required? Other researchers, such as Peterson [PBS89] and Ladin [LLS90], have proposed schemes in which users play a direct role in maintaining, transmitting and reasoning about causality information. Such approaches allow a sophisticated user—or a clever compiler—to exploit application semantics inaccessible to the runtime subsystem. Our approach, in contrast, is motivated by the observation that naive programmers expect causal order to be respected as a matter of course. Indeed, some Isis users employ asynchronous communication without really understanding the causality issue at all. The decision to respect causal order means that such users will be able to develop correct code; a decision not to respect causality would have exposed subtle race conditions. Further, we have observed that although requirements for breaking causal order do arise, they are often related to the existence of sophisticated, independently developed, subsystems.
This leads us to favor a declarative approach in which explicit action must be taken to prevent the system from enforcing causal ordering. Our proposal is that groups be created in a specific causality domain. If the domain is not specified, a standard "default" domain would be used. Causality is observed only between groups in the same domain. A causality domain resembles a Psync session [PBS89], but may contain multiple, overlapping process groups. Naive developers would accept this default, thus placing the groups in their applications into a single causality domain. Sophisticated users—such as the author of the debugging package for our task bag and blackboard graphics application above—would take explicit action to ensure that debugging communication occurs in a separate causality domain. Our emphasis is thus on simplicity of use—at the possible expense of concurrency. We prefer to enforce the occasional spurious causal ordering, rather than requiring that all programmers decide which causal information should be propagated where. The presentation of causality information points to the broader question of how process groups should be presented within programming languages and object-oriented environments. Systematic study of these issues will be needed if process groups are to become a common and widely used programming tool. One of us (Cooper) is currently examining these issues in the context of a distributed variant of Concurrent ML [Rep90].

4.5 Should abcast be ordered between groups?

The total order achieved by abcast is used to serialize independent requests to a process group, providing a simple form of mutual exclusion or concurrency control. When groups represent distinct objects, there is generally no need for abcast ordering to be observed at group overlaps (i.e. when two or more objects reside at the same process). Rather, each object is responsible for its own concurrency control (e.g.
to maintain one-copy semantics for replicated data), and the object implementations are usually separate and non-interfering. In these cases a single-group abcast will ensure serializability, while the causality semantics of abcast will ensure that the relative ordering of requests at different objects is observed. However these assumptions, while common, do not always hold. An object could be known by more than one group address, or there may be no direct mapping between groups and objects. One example would be overlapping diffusion groups (see Section 2.4) consisting of the same set of server processes, and intersecting sets of clients. One can imagine applications in which abcasts from the servers should be ordered totally at the overlapping client sets. For an abstract example, consider a distributed form of the dining philosophers problem. For each philosopher there is a process group that includes the pair of forks to use. One might use abcasts to atomically claim or release the forks for a given philosopher. Notice that no two processes (forks) receive the same pair of multicasts. Yet, abcast ordering is important here, because if abcast is not globally ordered, a cyclic request ordering could arise that would cause a deadlock. This example highlights a subtlety with multiple group abcast semantics. There are two reasonable generalizations of single group ordering. In the first, two concurrent abcasts, one to each of two overlapping groups, are ordered totally, but only at the processes in the intersection of the groups. In the second, stronger, definition abcast delivery is globally ordered. The first definition permits cycles in abcast delivery orderings; the second does not [GT90]. While we can create abstract examples to motivate multiple group abcast ordering, we have yet to see practical situations where this kind of ordering is necessary.
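The stronger, globally ordered definition can be pictured with a trivial (and costly) central sequencer. This is a sketch of ours, not an Isis protocol: every abcast, to whatever group, draws one global sequence number, so no cyclic delivery ordering can arise even at processes that share no common multicast.

```python
# Sketch of globally ordered abcast via a single sequencer (hypothetical).
import itertools

class GlobalSequencer:
    def __init__(self):
        self._seq = itertools.count()

    def abcast(self, groups, payload):
        """Stamp the multicast with one global sequence number; every
        receiver delivers in stamp order, even across group overlaps."""
        return (next(self._seq), groups, payload)

def delivery_order(msgs):
    """The order any correct receiver delivers its subset of messages."""
    return [payload for _, _, payload in sorted(msgs, key=lambda m: m[0])]
```

The single sequencer is exactly why the globally ordered protocol is slower: every multicast, in every group, pays for the shared stamp.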
Further, protocols that provide global order are more costly than protocols that are ordered only within a single group: in the current Isis protocols, a causal, locally ordered abcast is more than twice as fast as the best causal, globally ordered abcast protocol we could devise. This perhaps argues for a notion of ordering domains, analogous to causality domains. For example, one might provide a global abcast order within the subgroups of a hierarchical group, but not between two "unrelated" groups. However, we are unconvinced that ordering domains would see much use. For the moment, we are implementing single group abcast semantics and will re-evaluate this decision in the light of further experience. To summarize:
\begin{itemize}
\item In most cases, causality should be preserved when a communication chain leaves and re-enters a group.
\item Causality domains allow the scope of causality obligations to be restricted, in particular for applications with subsystems that must not interfere with one another.
\item The abcast ordering is normally not needed when multicasts to two different groups happen to overlap. An exception arises when the two groups are used within a single object. Were this common, it would argue for a notion of \textit{ordering domain} similar to the one for causality.
\end{itemize}

5 Extended example: Causal process pairs

To better justify our assertions, we now present a process-pair scheme for fault tolerance designed to be as efficient as possible within our architecture. The example illustrates several points. First, essentially all the issues discussed above arise, and the choices favored in the previous sections lead to simple solutions. Second, the performance of the overall fault-tolerance solution would be quite good – theoretically, as good or better than any previously known solution (we recognize that until we complete a full implementation and compare it directly to an implementation of some other method, this claim lacks the force of an experimental result).
Finally, the example demonstrates that our architecture permits an obvious and important problem to be addressed in an elegant way using general primitives, suggesting that hand-crafted solutions to these sorts of problems are not necessarily preferable to solutions layered over a more standard subsystem. We are given a system consisting of processes \{\textit{P}, \textit{Q}, ...\} that communicate by sending point-to-point messages, and we wish to make some of these processes tolerant to single crash failures in a manner that is as transparent as possible to the programmer. This problem has been explored by many researchers and companies [Bar81, BBG+89, SY85, JZ87]. The basic idea of process pairs is to maintain a \textit{backup process} for each \textit{primary process} that we wish to make fault tolerant. The backup process keeps itself synchronized with the primary by keeping a checkpoint of the state of the primary, duplicates of any requests sent to the primary subsequent to the checkpoint, and enough supplemental data to overcome non-determinism in the execution. For each primary process, \textit{P}, let \textit{P'} denote its backup. As illustrated in Figure 6, a process \textit{P} will send a request \textit{r} to the process pair \((\textit{Q}, \textit{Q}')\) by first sending a \textit{trace} message \textit{m} to its backup, and then sending the request, \textit{r}, using a multicast to the pair \((\textit{Q}, \textit{Q}')\). More specifically: Figure 6: Transparent fault-tolerance using causal process-groups. 1. Message $m$ will be sent by $P$ to $P'$, and will contain sufficient trace information to enable $P'$ to reproduce the execution of $P$ up to this point. In the case where $P$ is completely deterministic, $m$ might be empty (in which case the action of sending it can be omitted). 
Otherwise, it would contain information about the order in which $P$ received and processed requests, the order in which its threads were scheduled, and other sorts of information needed to resolve non-determinism in the execution. If desired, this message can also contain a complete checkpoint of the state of $P$, and indeed it may be desirable to periodically make such a checkpoint to ensure that recovery from failure will incur little delay.

2. Message $r$, which is causally ordered after $m$, contains the request that $P$ is issuing to $Q$. $P$ will send $r$ using an atomic causal multicast to the group $(Q, Q')$.

The reply from $Q$ to $P$ is treated in the same manner: the scheme is completely symmetrical with respect to clients and servers. The trace message, $m$, from $P$ to $P'$ indicates the order in which $P$ removed requests from its input queue because $P$ may receive multiple concurrent requests, say from $R$ and $S$. Although these messages will also be sent to $P'$, unless the order of delivery is the same at $P$ and $P'$, $P'$ will not know the order in which $P$ processed them. This information can be omitted from the trace message if a totally ordered multicast is used for all requests. The same discussion applies to the trace message $m'$ sent by $Q$ to $Q'$. To recover from the failure of $P$, process $P'$ will, upon observing the failure event, reconstruct the state that $P$ was in by loading the most recent checkpoint and simulating the computation performed by $P$. This may cause $P'$ to send duplicate messages to $Q$, which should detect and discard them (since $P'$ behaves exactly the way that $P$ behaved prior to failing, this can be done by numbering messages consecutively).
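The duplicate-suppression step can be sketched with per-sender sequence numbers. This is a hypothetical sketch of ours; the paper specifies only that messages be numbered consecutively.

```python
# Sketch: Q discards replayed requests after failover.  Because P' replays
# exactly what P sent, the replayed copies carry the same sequence numbers.

class Server:
    def __init__(self):
        self.last_seen = {}           # logical sender -> highest seq delivered
        self.log = []

    def handle(self, sender, seq, request):
        if seq <= self.last_seen.get(sender, 0):
            return "duplicate"        # replay from the backup: discard
        self.last_seen[sender] = seq
        self.log.append(request)
        return "accepted"
```

Note that $P'$ presents itself under $P$'s identity, so the sequence space is shared across the failover.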
This scheme introduces two kinds of overhead not present in the original computation: extra messages (between $P$ and $P'$ and between $Q$ and $Q'$), and delay along the critical path of the computation—when failure is rare, this would be the interactions between primaries. The arguments made in Section 2.4, favoring asynchronous, causally ordered communication, apply here. By using causal communication throughout, there will never be any need to delay a message along the critical path (the transmission of $r$ from $P$ to $Q$) because of the messages sent to the backup processes (the transmission of $m$ to $P'$ and the copy of $r$ sent to $Q'$; messages $m'$ and the copy of $r'$ sent to $P'$ are not on the critical path). Given adequate background capacity to send these trace messages and remote (or “backup”) messages, the fault-tolerant version of a computation might actually execute at the same speed as the original one! Moreover, although trace messages and messages to the backup processes do consume bandwidth, they can be delayed and sent in batches, thus pipelining communication and achieving higher efficiency. Although the details will depend on the protocol used, in many situations, the extra messages sent will not impact the performance of the application, provided of course that transmission of messages to backups does not cause congestion at the communication interface. The stability property explained earlier is important, because it defines the limits of allowable asynchrony beyond which safety could be compromised. Specifically, if multicast $a$ causally precedes multicast $b$ and some process that receives $b$ remains operational, a system that implements causal ordering must ensure that $a$ is eventually delivered to all of its destinations (except those that fail). In our application, there is no real limit to the extent to which primaries can run “ahead” of the backups, except for the requirement that this safety condition be maintained. 
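Batching trace messages off the critical path might look like the following sketch; the names and the batch policy are our own assumptions, not part of the scheme as described.

```python
# Hypothetical trace-message batcher: the primary appends trace records
# locally (O(1), no network traffic on the critical path) and ships them
# to the backup in batches, pipelining communication.

class TraceBatcher:
    def __init__(self, send, batch_size=4):
        self.send = send              # transport callback to the backup
        self.batch_size = batch_size
        self.buffer = []

    def record(self, event):
        """Called on the critical path; only buffers locally."""
        self.buffer.append(event)
        if len(self.buffer) >= self.batch_size:
            self.flush()

    def flush(self):
        """Ship any buffered trace records as one batch."""
        if self.buffer:
            self.send(list(self.buffer))
            self.buffer.clear()
```

A real implementation would also flush on a timer and before any message whose stability the application awaits.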
If we represent each process pair $(Q, Q')$ as a process group with two members, this example illustrates the need to preserve causality across the boundaries of process groups. To see this, consider when $P$ sends a trace message $m$ to $P'$ and then sends some request $r$ to $(Q, Q')$. Message $m$ causally precedes $r$, but they are not sent in a single process group. But, if $P$ now fails, we need to know that if $r$ does get delivered, $m$ will also be delivered. Thus, causality across group boundaries prevents a serious potential bug. The example also illustrates the need to communicate to a group from outside it; here, in fact, most communication is originated by external "clients" of the group. Finally, notice that the synchronization of group membership with respect to communication would be needed if one wished to create a new backup after failure of the primary. Although we will not develop the details here, it is interesting to note that the scheme described above is nearly identical to the Tandem process-pair implementation, with the exception of the asynchrony afforded by the causal ordering property. However, our description is more general; for example, it extends without modification to the case of $k$ backups, while the Tandem work is very much tied to the assumption that $k = 1$. Our scheme is also similar to that used in Targon/32 [BBG+89], but uses only a two-way causal multicast, rather than the three-way totally ordered multicast (abcast) they require. Using abcast rather than cbcast for transmission of requests would eliminate many of the trace messages, but has the potentially serious disadvantage of delaying delivery of messages to the primary, introducing latency on the critical path but simplifying recovery after a failure.
This argues in favor of cbcast for transmitting requests to a process pair.\footnote{Readers familiar with the algorithm in [BSS91] will realize that, under this algorithm, the approaches might actually have identical costs. The implementation of (causal) abcast in that paper uses a token holder to decide delivery ordering, and messages are never delayed at the token holder. If the primary member of a process pair is always used as token holder, as would be likely in an implementation of the approach under Isis, the flow of messages resulting from transmission of requests to the pair using abcast is the same as would result when using cbcast with a trace message that informs the backup of the order that was used.}

6 An implementation

Many of the foregoing observations and conclusions have been driven not just by usage of the Isis system, but by lessons learned from its implementation. So, while this paper is primarily about the semantics of group-based systems, it is clearly important that the methods we propose correspond to an efficiently implementable system architecture. In fact, our group is presently engaged in the design and implementation of a successor to Isis, called Horus, that will employ the experience gained from the initial system and the observations made above to achieve substantially increased flexibility and performance. Our basic approach is to separate Isis into two parts, one of which would be linked into the application address space, and one residing in the operating system. The operating system module can be made extremely spare, implementing a bare minimum of functionality: virtually synchronous process groups, causality domains, cbcast and abcast – the core functions identified in the discussion above. The remainder of the Isis model and the Toolkit itself would be realized at the library level. It would be beyond the scope of this paper, and somewhat premature, to discuss the design of Horus in greater detail.
Completion of a prototype is expected in late 1992, at which time we plan to follow up on the present paper with one giving details and performance.

7 Conclusions

Experience with real users can reshape one's perspective on a computer system. This has been the case with the Isis system, which entered into wide academic and commercial use with generally positive but sometimes surprising results. Our experiences support the belief that distributed systems should implement process groups at a basic level. The mechanisms underlying this support need not be as exhaustive as in the present Isis system, which provides a bewildering variety of group membership and multicast ordering options to its users. Our understanding of the system and its users has now reached a point where we can argue that these be reduced to two mechanisms (atomic group membership and causal multicast) over which the virtually synchronous toolkit can be rebuilt. Our paper makes two types of contributions. The first of these is at the level of group structures, particularly by refinement of the notion of group to address issues raised by having multiple groups, groups with external clients, and groups of groups. Our approach recognizes that clients are more numerous than servers, but that their communication patterns and use of group semantics are restricted, and it organizes groups into causality domains. We expect these styles of client-server groups to be durable because they are directly based on uses observed in practice. Although new group and multicast protocols are to be expected, these group structures should continue to present programmers with the interface they actually need. Our second major contribution is the argument that asynchronous communication, combined with failure atomicity and causal ordering, is faster than synchronous request-response communication, and is sufficient for most communication needs.
Although a total ordering is sometimes necessary, such ordering imposes unavoidable delays and should be implemented on top of a causal communication primitive. Our new virtual synchrony architecture retains some of the complexity for which the original Isis system can be criticized. We believe that this is acceptable for two reasons. First, we see no way to further simplify the system without breaking important properties. Additionally, the elegance of the fault-tolerance transformation stands as evidence that the approach does result in simple solutions to important distributed computing problems. Thus, although the rationale of the architecture and the details of its implementation may continue to mystify non-experts, users of the system will find these concerns unimportant because it substantially simplifies their work. Just as the obscure details of register scheduling in an optimizing compiler or concurrency control in a database system do not prevent us from using these technologies, we believe that programmers of the next generation of distributed applications will leave the details of communication to the operating system – and will be far more productive for having done so.

8 Acknowledgements

The material presented here was arrived at through discussions with many others. We thank Micah Beck, Tushar Chandra, Rich Draves (CMU), Brad Glade, Keith Marzullo, Doug Orr (Chorus), Franklin Reynolds (OSF), Marc Rozier (Chorus), Fred Schneider, Pat Stephenson, Robbert van Renesse, and Mark Wood. Our architecture was also influenced by the work of Frans Kaashoek (Vrije), Paulo Veríssimo (INESC), and by the ANSA project. And we thank Maureen Robinson for producing the figures.

References
Proceedings of the Ninth International Workshop on Graph Transformation and Visual Modeling Techniques (GT-VMT 2010)

Preserving constraints in horizontal model transformations

Paolo Bottoni¹, Andrew Fish², Francesco Parisi Presicce¹

¹ Dipartimento di Informatica, "Sapienza" Università di Roma, Italy
² Computing, Mathematical and Information Sciences, University of Brighton, UK

Abstract: Graph rewriting is gaining credibility in the model transformation field, and tools are increasingly used to specify transformation activities. However, their use is often limited by special features of graph transformation approaches, which might not be familiar to experts in the modeling domain. On the other hand, transformations for specific domains may require special constraints to be enforced on transformation results. Preserving such constraints by manual definition of graph transformations can be a cumbersome and error-prone activity. We explore the problem of ensuring that possible violations of constraints following a transformation are repaired in a way coherent with the intended meaning of the transformation. In particular, we consider the use of transformation units within the DPO approach for intra-model transformations, where the modeling language is expressed via a type graph and graph conditions. We derive additional rules in a unit from a declarative rule expressing the principal objective of the transformation, so that the constraints set by the type graph and the graph conditions hold after the application of the unit. The approach is illustrated with reference to a diagrammatic reasoning system.

Keywords: DPO, automatic generation, model transformation.

1 Introduction

Graph rewriting-based tools are increasingly used in the field of model transformation.
However, their use is often limited by the special features of the different graph transformation approaches, which might not be familiar to experts in the modeling domain. On the other hand, transformations for specific domains may require constraints to be enforced on the results of the transformation. In this paper we explore the problem of ensuring that possible violations of constraints are managed in a way coherent with the intended meaning of the transformation. We consider horizontal (or in-place) model transformations which destructively update a model expressed in a given language, for the case where the modeling language is expressed via a type graph and a set of graph conditions. In particular, we study transformations in reasoning processes deriving inferences via logical steps creating or deleting model elements. While modelers are generally clear on what they want to achieve by defining a transformation, the evaluation of all of its consequences may be complex, and the definition of the implied preserving or enforcing actions cumbersome and error-prone. We propose an approach to the automatic construction of transformation units achieving the effect of an intended model transformation while ensuring that all conditions are satisfied at the end of the unit if they held at its start. We consider transformations consisting of the creation or deletion of elements of a specific type, expressed as principal declarative rules. As their application may violate some conditions, they have to be applied in a proper (condition preserving) context, or (condition enforcing) repair actions have to be taken to restore the satisfaction of such conditions. Hence, additional rules are defined, derived from the principal one and the conditions to be enforced. The approach is illustrated with reference to a diagrammatic reasoning system. Paper organisation.
Section 2 discusses related work on constraint preservation in graph transformation, and Section 3 provides the relevant formal notions. Section 4 introduces Spider Graphs (SGs) as running example, before presenting the approach in Section 5 and applying it to SGs in Section 6. Finally, Section 7 draws conclusions and points to future developments.

2 Related work

Rensink and Kuperus have exploited the notion of nested graphs to deal with the amalgamated application of rules to all matches of a rule. In [RK09], they define a language to specify nested graph formulae. A match can be found from a nested graph rule to a graph satisfying a formula, according to a given morphism, and the application of a composite rule ensues. Their approach is focused on avoiding control expressions when all the matches of a rule have to be applied, while we focus here on preserving constraints with reference to a single match. Bottoni et al. have defined methods to extend single declarative rules for model transformation so that they comply with specific patterns defining consistency of interpretation in triple graphs [BGL08]. They define completions of single rules with respect to several patterns, while we are interested here in constructing several rules, navigating along different sets of constraints. Taentzer et al. have proposed the management of inconsistencies among different viewpoints of a model in distributed graph rewriting. For example, the resolve strategy requires the definition of the right-hand sides of rules to be applied when the left-hand side identifying the inconsistency is matched [GMT99]. The detection of inconsistencies between rules representing different model transformations has been attacked by static analysis methods in [HHT02]. Similarly, Münch et al. have added repair actions to rules in case some post-conditions are violated by rule application [MSW00]. In all these cases, actions were modeled through single rules.
Habel and Pennemann [HP09] unify theories about application conditions from [EEHP06] and nested graph conditions from [Ren04], lifting them to high-level transformations. They transform rules to make them preserve or enforce both universal and existential conditions. Their approach leads to the generation of a single rule incorporating several application conditions derived from different conditions with reference to the possible matches of the rule on host graphs. In his dissertation [Pen09], Pennemann expands on the topic, also introducing programs with interfaces, analogous to transformation units, but allowing passing of matches. In [OEP08], Orejas et al. define a logic of graph constraints to allow the use of constraints for language specification, and to provide rules for proving satisfaction of clausal forms. The idea of introducing basic rules derived from entities and associations defined in a meta-model is exploited in [BQV06] to define constraints on the interactive composition of complex rules, by allowing their presence in the rule left or right-hand sides only in accordance with their roles in the meta-model, where only the abstract syntax is taken as a source of constraints. Ehrig et al. describe a procedure, exploiting layers, which derives a grammar to generate (rather than transform) instances of the language defined by a meta-model with multiplicities [EKTW06]. Satisfaction of OCL constraints is checked a posteriori on a generated instance.

3 Background

For a graph $G = (V(G), E(G), s, t)$, $V(G)$ is the set of nodes, $E(G) \subset V(G) \times V(G)$ the set of edges and $s, t : E \rightarrow V$ the source and target functions. In a type graph $TG = (V_T, E_T, s^T, t^T)$, $V_T$ and $E_T$ are sets of node and edge types, while $s^T : E_T \rightarrow V_T$ and $t^T : E_T \rightarrow V_T$ define source and target node types for each edge type.
$G$ is typed on $TG$ via a graph morphism $type : G \rightarrow TG$, where $type_V : V \rightarrow V_T$ and $type_E : E \rightarrow E_T$ preserve $s^T$ and $t^T$, i.e. $type_V(s(e)) = s^T(type_E(e))$ and $type_V(t(e)) = t^T(type_E(e))$. $|V_t(G)|$ denotes the number of nodes of type $t \in V_T$ in $G$. A DPO rule [EEPT06] consists of three graphs: left- and right-hand side ($L$ and $R$) and interface graph $K$. Two morphisms$^1$ $l : K \rightarrow L$ and $r : K \rightarrow R$ model the embedding of $K$ (containing the elements preserved by the rule) in $L$ and $R$. Figure 1 shows a DPO direct derivation diagram. Square (1) is a pushout (i.e. $G$ is the union of $L$ and $D$ through their common elements in $K$), modeling the deletion of the elements of $L$ not in $K$, while pushout (2) adds the new elements, i.e. those present in $R$ but not in $K$. Figure 1 also illustrates the notion of negative application condition (NAC), as the association with a rule of a set of morphisms $n_i : L \rightarrow N_i$, also noted $N_i \overset{n_i}{\leftarrow} L$. A rule is applicable on $G$ through a match $m : L \rightarrow G$ if there is no morphism $q_i : N_i \rightarrow G$, with $N_i$ in NAC, commuting with $m$ (i.e. $q_i \circ n_i = m$). We exploit the partial order $\leq$ induced, up to isomorphisms, by monomorphisms on the set of graphs, i.e. $g_1 \leq g_2 \Leftrightarrow \exists m : g_1 \rightarrow g_2$.

![Figure 1: DPO Direct Derivation Diagram for rules with NAC.](image)

Graph conditions allow the specification of models by forbidding the appearance of certain subgraphs, or by enforcing others to appear in given contexts. We use here a class of conditions $\mathcal{O}$ similar to those in [HP09], where a condition over a graph $A$ is either of the form $\text{true}$ or of the form $\exists(a, q)$, with $a : A \rightarrow Q$ a morphism from $A$ to some graph $Q$ and $q$ a condition over $Q$.
Conditions are also obtained by using the Boolean connectives $\neg$ and $\lor$, and can be written in the form $\forall(a, q)$, equivalent to $\neg\exists(a, \neg q)$. We assume that all conditions in a set $\Theta \subset \mathcal{O}$ differ in the $a$ morphism, so that $(a_1, q_1), (a_2, q_2) \in \Theta \Rightarrow (A_1 \not\leq A_2) \lor (Q_1 \not\leq Q_2)$. We will also use the short forms $\exists(Q)$ for $\exists(a : \emptyset \rightarrow Q, \text{true})$ and $\nexists(Q)$ for $\neg\exists(a : \emptyset \rightarrow Q, \text{true})$. We restrict here to positive conditions of types $\exists(Q)$ or $\forall(a : \emptyset \rightarrow Q, q)$, noted $\forall(Q, q)$, with $q = \bigvee_{j \in J} q_j : Q \rightarrow W_j$ a disjunction of existential conditions. In this case, all the conditions of the form $\exists(Q_i) \in \Theta$ can be collapsed into a single condition $\exists(\overline{Q})$, with $\overline{Q}$ the colimit of all the $Q_i$ on the diagram constructed with all pairwise maximal common subgraphs. Simple negative conditions have the form $\nexists(Q)$.

¹ In this paper, when we speak of morphisms, we will always consider them injective.

**Definition 1** Given a graph $G$, we say:

• A morphism $m : X \rightarrow G$ satisfies a condition $C$, $(m \models C)$, iff one of the following holds:
1. $C = \text{true}$.
2. $C = \exists(Y)$ and $Y \leq X$.
3. $C = \forall(X, q)$ and $\exists m_j : m(X) \rightarrow W_j$ s.t. $q_j = m_j \circ m$ for some $q_j$.
4. $C = \nexists(Y)$ and $Y \not\leq X$.
5. $C = C_1 \lor C_2$ and $m \models C_1$ or $m \models C_2$.

• A graph $G$ satisfies $C$ ($G \models C$), iff one of the following holds:
1. $C = \text{true}$.
2. $C = \exists(Y)$ and there exists $m : Y \rightarrow G$ s.t. $m \models C$.
3. $C = \forall(X, q)$ and for each $m : X \rightarrow G$, $m \models C$.
4. $C = \nexists(Y)$ and there is no morphism $m : Y \rightarrow G$.
5.
$C = C_1 \lor C_2$ and $G \models C_1$ or $G \models C_2$.

We say that a graph $G$ typed on $TG$ is a model for $\Theta$, noted $G \models \Theta$, if for each $C_i \in \Theta$, $G \models C_i$. We assume $\Theta$ to be a consistent set of conditions, whose models are finite non-empty graphs; in particular, simple graphs, with no two instances of the same edge type between two nodes. Transformation units control rule application through control words over rule names [KKS97]. Given: 1) $\mathcal{G}$ the class of typed graphs; 2) $\mathcal{R}$ the class of DPO rules with NACs on $\mathcal{G}$; 3) $\rho$ the DPO derivation relation; 4) $\mathcal{E}$ a class of graph expressions (here defined by type graphs and graph conditions), where the semantics of an expression $e$ is a subclass $\text{sem}(e) \subset \mathcal{G}$; 5) $\mathcal{W}$ a class of control words over identifiers of rules in $\mathcal{R}$, exploiting single rules, the sequential construct $;$, the iteration construct $w^*$, with $w \in \mathcal{W}$, and the alternative choice $|$; a transformation unit is a construct $TU = (e_1, e_2, P, \text{imp}, w)$, with $e_1, e_2 \in \mathcal{E}$ initial and terminal graph class expressions, $P \subset \mathcal{R}$ a set of DPO rules, $\text{imp}$ a set of references to other, imported, units, whose rules can be used in the current one, and $w \in \mathcal{W}$ a control word enabling rules from $P$, and units from $\text{imp}$, to be applied. TUs have a transactional behaviour, i.e. a unit succeeds iff it can be executed according to the control condition; it fails otherwise. The semantics of a $TU$ is the set $\text{sem}(TU) = \{(g_1, g_2) \mid g_1 \in \text{sem}(e_1), g_2 \in \text{sem}(e_2), g_1 \stackrel{TU}{\Longrightarrow} g_2 \downarrow\}$, where $\downarrow$ indicates successful termination.
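As a concrete illustration of Definition 1, the two base cases $G \models \exists(Y)$ and $G \models \nexists(Y)$ amount to searching for an injective, type- and edge-preserving morphism $Y \rightarrow G$ (recall that all morphisms are assumed injective). The following brute-force Python sketch is our own illustration, not the paper's code; the flat encoding of a typed graph as a node-to-type map plus an edge set is an assumption made for readability:

```python
# G |= exists(Y): some injective, type- and edge-preserving morphism Y -> G.
# G |= nexists(Y): no such morphism exists.
from itertools import permutations

def morphism_exists(y_nodes, y_edges, g_nodes, g_edges):
    """y_nodes/g_nodes: dict node -> type; y_edges/g_edges: sets of (src, tgt)."""
    ys = list(y_nodes)
    # Enumerate injective node maps Y -> G (permutations guarantee injectivity).
    for image in permutations(g_nodes, len(ys)):
        m = dict(zip(ys, image))
        types_ok = all(y_nodes[v] == g_nodes[m[v]] for v in ys)
        edges_ok = all((m[a], m[b]) in g_edges for (a, b) in y_edges)
        if types_ok and edges_ok:
            return True
    return False

def satisfies_exists(Y, G):       # G |= ∃(Y)
    return morphism_exists(*Y, *G)

def satisfies_nexists(Y, G):      # G |= ∄(Y)
    return not morphism_exists(*Y, *G)
```

On a host graph with one Spider tied to one Foot, $\exists$(a Spider) and $\exists$(a Spider with a Foot) hold, while $\nexists$(two Spiders) holds because no injective image for two Spider nodes exists.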
4 A Running Example: Spider Diagrams and Spider Graphs

Spider Diagrams are a reasoning system based on Euler diagrams. Several variants exist, differing in syntax and semantics [HMT+01]. We adopt a simplified version, based on Venn, rather than Euler, diagrams, and omitting shading and strands. We first provide an indication of the concrete syntax of the diagrams and an informal semantics. Then we propose a graph-based abstract model for them, called Spider Graphs, which differs from the usual algebraic abstract models and is in fact slightly closer to the concrete model, even modelling spiders' feet. Let $C = \{C_1, \ldots, C_n\}$ be a collection of simple closed curves in the plane with finitely many points of intersection between curves. A zone is a region of the form $X_1 \cap \cdots \cap X_n$, where $X_i \in \{\text{int}(C_i), \text{ext}(C_i)\}$, the interior of $C_i$ or the exterior of $C_i$, for $i \in \{1, \ldots, n\}$. If each of the $2^n$ possible zones of $C$ is non-empty and connected, then $C$ is a Venn diagram (see [Rus97] for more details). Each zone $z$ defines a unique partition of the set $C$, according to whether $z$ is inside or outside a curve. Two zones are called twins if their inside and outside relations are switched for exactly one curve. In this paper, a Spider Diagram is a Venn diagram whose curves are labelled, together with extra syntax called spiders, which are trees whose vertices (called feet) are placed in unique zones. The set of zones containing a spider's feet is called its habitat. Special arcs, called ties, can be drawn between feet of different spiders in the same zone. Intuitively, each curve represents a given set (indicated by the label) and each zone represents some set intersection.
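The zone and twin definitions can be made concrete with a small enumeration. The following Python sketch is our own illustration (the encoding of a zone as a pair of inside/outside curve sets is an assumption, chosen to mirror the zone descriptions used for the example in Figure 2):

```python
# Enumerate the 2^n zone descriptions over a set of curves, each given as
# (inside set, outside set), and test the twin relation: two zones are twins
# iff their in/out status differs for exactly one curve.
from itertools import combinations

def zones(curves):
    zs = []
    for r in range(len(curves) + 1):
        for inside in combinations(sorted(curves), r):
            inside = frozenset(inside)
            zs.append((inside, frozenset(curves) - inside))
    return zs

def twins(z1, z2):
    # The symmetric difference of the inside sets contains exactly one curve.
    return len(z1[0] ^ z2[0]) == 1
```

For curves $\{A, B\}$ this yields the four zones of the running example, with e.g. the zones inside $A$ only and inside both $A$ and $B$ being twins due to curve $B$.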
A spider indicates the existence of an element within the set determined by its habitat, whilst a tie between a pair of feet of different spiders within a zone indicates equality of elements, if both spiders represent an element in the set represented by the zone. Figure 2 (left) shows an example of a Spider Diagram, with two curves $\{A, B\}$ and four zones described by $\{(\{A\}, \{B\}), (\emptyset, \{A, B\}), (\{B\}, \{A\}), (\{A, B\}, \emptyset)\}$. Here, these zones are the four minimal regions of the plane determined by the curves; for example, the zone described by $(\{A\}, \{B\})$ is the region $\text{int}(A) \cap \text{ext}(B)$, which is inside $A$ but outside $B$. The habitat of spider $s$ is the set of zones $\{(\{A\}, \{B\}), (\{A, B\}, \emptyset)\}$, while that of $t$ is the singleton $\{(\{A, B\}, \emptyset)\}$. Informally, the diagram semantics is: there are two sets $A$ and $B$, there exists an element named $s$ in $A$ and an element named $t$ in $A \cap B$. Moreover, if $s$ is in $A \cap B$ then $s = t$. We provide here an abstract graph-based model of a Spider Diagram, called a Spider Graph, not taking into account its concrete geometry. Since we are interested here only in syntactic aspects, we do not consider the labeling of the curves. We obtain the type graph of Figure 3 (left), where nodes represent the diagram elements Curve, Foot, Spider and Zone, and edges represent relations between them. A twin edge indicates that two zones are twins w.r.t. some curve, and an inside/outside edge indicates whether a curve contains/excludes a zone, respectively. In Figure 2 (right) the Spider Graph associated with the Spider Diagram on the left is shown. The names of the nodes show the correspondence with the objects in the diagram. We have two curve nodes in each possible relation with four zones².
For ease of reading, the zone nodes are given names consisting of a list of the lower case letters corresponding to the upper case letters used as names of the curves the zones are inside, and we use $O$ for the name of the node corresponding to the zone outside all curves in the diagram. Zone node pairs $ab$ and $b$, and $O$ and $a$, are twinned due to curve $A$, whilst $ab$ and $a$, and $O$ and $b$, are twinned due to curve $B$. We now present the conditions completing the definition of the class of Spider Graphs.

² To keep the graph simple, we have omitted the outside edges, which are complementary to the inside ones.

Figure 3: The type graph (left) and negative conditions (right) for Spider Graphs.

Figure 3 (right) shows a set of conditions of the form $\nexists(Q)$, presented as forbidden graphs. They prevent duplication or inconsistency of information and state the uniqueness of relations between zones and curves. Moreover, we assume the existence of all negative conditions forcing the graphs to be simple. We omit the direction of edges and their labels, when understood from the type graph, and use the abbreviations $i$ and $o$ for the inside/outside case. The remaining conditions force the existence of a partition of the set of curves for all zones, and require suitable contexts for zones and feet. We present them adopting a visual syntax where a condition $\exists(a : A \rightarrow Q, q)$ is represented by a box, separated into two parts by a horizontal line, with the top part containing a depiction of the morphism $a$ and the bottom part containing a box depicting the condition $q$ on $Q$. An empty bottom box corresponds to true. Each condition box has an external tab containing either quantifier information or the Boolean connective $\lor$, $\land$ or $\neg$. As we use conditions with $A = \emptyset$, we only present $Q$, and we do not repeat $Q$ in the depiction of $q$.
Numbers indicate identification in the morphisms, while unnumbered nodes indicate a hidden existential quantification, as usual. Edges between identified nodes are also assumed to be identified in the morphisms. The class of Spider Graphs is the intersection of the languages defined by the type graph and the negative conditions of Figure 3, and the positive conditions in Figures 4 to 6.

Figure 4: Conditions on single elements.

Reasoning rules are derived on top of the algebraic abstract models for Spider Diagrams. These are syntactic transformations whose application corresponds to logical deduction, according to the semantics. They are usually specified by complex algorithmic procedures, during which the intermediate diagrams may not be logical consequences of the premise diagram, with pre and post conditions taking into account the stated semantics of the diagram. For instance, a rule to add a new curve must split every zone into two zones, one inside and one outside the new curve, as well as duplicating spiders' feet in zones. Whereas the first effect derives from the syntactical conditions, the second is a semantic aspect.

5 Condition preserving rules

We discuss the derivation of a condition-preserving transformation unit $TU^g_t$ for the generation of an element of type $t$. The initial and terminal expressions $e_1$ and $e_2$ for $TU^g_t$ define the class of graphs typed on $TG$ and satisfying $\Theta$. $TU^g_t$ is associated with the execution of $r : \emptyset \leftarrow \emptyset \rightarrow [t]$ and is constructed so that, given a graph $G \in \text{sem}(e_1)$, for $G \stackrel{TU^g_t}{\Longrightarrow} H$, $(G,H) \in \text{sem}(TU^g_t)$, and $G \leq G + [t] \leq H$, where $+$ indicates the pushout along the empty subgraph. Note that in general $G + [t] \not\models \Theta$, but $G + [t] \models \Theta'$ for some $\Theta' \subset \Theta$.
Hence, we admit that some conditions may not be satisfied at intermediate steps of the unit application, and define an operational class in which to perform transformations. Graphs in this class satisfy a subset of the graph conditions and may be typed on some $TG'$ with additional types and edges w.r.t. $TG$. In particular, we use here the subset $\Theta'$ containing $\exists(\overline{Q})$ and all the conditions $\nexists(Q_i)$ in $\Theta$. Before presenting the algorithm, we give its rationale. We only have to consider universal and negative existential conditions, as positive existential conditions cannot be violated by adding an element. However, adding $[t]$³ produces a graph $G + [t]$ which may not satisfy $\Theta$ in two ways: either it contains a forbidden subgraph, or it provides a new match for the premise of a universal condition, but it fails to satisfy the conclusion.

³ Here and in the rest of the paper, $[t]$ denotes the graph consisting of a single node of type $t$.

To solve the first problem, given⁴ a rule $r : L \rightarrow R$ in $TU^g_t$ (including $r : \emptyset \rightarrow [t]$), for each condition $\nexists(X) \in \Theta$, the function $\text{genNAC}(r,X)$ adds to $r$ the set of NACs formed according to the construction in Figure 7 (left). Here $M_j$ is a maximal common subgraph of $R$ and $X$, and $M'_j$ is a maximal common subgraph of $M_j$ and $L$, s.t. all the squares are pushouts. Hence, $L \rightarrow X'_j \leftarrow X_j$ is the pushout for $L \leftarrow M'_j \rightarrow X_j$, with the second morphism given by arrow composition. The set of NACs contains all the morphisms $a'_j : L \rightarrow X'_j$ preserving the image of $L$ in $X_j$. This prevents the application of $r$ on a match which could create the forbidden subgraph $X$ (see [HHT96]).
Figure 7: Constructing NAC (left) and incorporating available context (right).

To solve the second problem, given a (universal) condition $C = \forall(Q, \bigvee_{j \in J} q_j : Q \rightarrow W_j)$ s.t. $[t] \leq Q$, the function $\text{genUniRules}(C)$ produces the set of rules $R(C)$, where each rule has the form $Q \xrightarrow{r_j} W_j$ and carries the set of NACs $\text{NAC}(C)$. $TU^g_t$ will contain an alternative choice among these rules, produced by the function $\text{alt}(R(C))$. In order to prevent these rules from being applied indefinitely in case of iteration on the choice, $\text{NAC}(C)$ contains a copy of each $W_j$, so the same match is not reused twice. Intuitively, these rules adjust the relations of the newly added element w.r.t. the contexts defined in their premises. However, several aspects have to be taken into account. For example, consider condition $C2$ in Figure 4 and suppose we want to add a Spider. Then, the derived rule will have to create a Foot (condition $C2$), but this will require a Zone (condition $C3$), which will require a Curve (condition $C4$), hence other additional Zones (conditions $C8$ and $C9$), with several relations to other curves and zones (conditions $C10$ – $C12$). On the other hand, a Zone for a Foot is already guaranteed to be present by $C1$, so that one can reuse existing context to satisfy this.
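To give a feel for the effect of rules derived from a universal condition, the following Python sketch is entirely illustrative (the flat graph encoding, the function name and the fresh-name scheme are our assumptions, not the paper's): it plays the role of repeatedly applying a derived repair rule for a condition like $C2$, "every Spider has a Foot", once on each unsatisfied match of the premise:

```python
def repair_spider_feet(nodes, edges):
    """nodes: dict id -> type; edges: set of (spider, foot) 'has' pairs.
    For every Spider with no Foot, add a fresh Foot and connect it,
    mimicking the application of a derived rule on each unsatisfied match."""
    nodes, edges = dict(nodes), set(edges)
    fresh = 0
    for n, t in list(nodes.items()):
        if t == "Spider" and not any(s == n for (s, _) in edges):
            foot = f"foot_{fresh}"   # hypothetical fresh-name scheme
            fresh += 1
            nodes[foot] = "Foot"
            edges.add((n, foot))
    return nodes, edges
```

Starting from two Spiders of which only one has a Foot, a single pass repairs the second Spider and the universal condition holds on the result; the real construction additionally threads NACs through the rules so that the same match is never repaired twice.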
To deal with such situations, given a rule $r : L \rightarrow R$ and a context $X$ to be reused (more on this later), the function $\text{reuseContext}(r,X)$ produces a collection of rules of the form $r_h : L_h \rightarrow R_h$ according to the construction in Figure 7 (right). Here, $L \rightarrow L_h \leftarrow X$ is the pushout along a maximal common subgraph $M_h$ of $L$ and $X$, and $X \rightarrow R_h \leftarrow R$ is the pushout of $X \leftarrow M_h \rightarrow R$. In general, one wants to obtain a $TU^g_t$ which, after applying $r : \emptyset \rightarrow [t]$ to $G$, proceeds through the following abstract steps, so that context is progressively constructed for the next step:

1. define all edges between the added node and existing nodes of $G$ as required by conditions;
2. generate new nodes as required by the conditions;
3. generate all edges for the new nodes, as required by the conditions.

For example, when adding a Curve, one has to: 1) define relations between the new curve and existing zones; 2) create new zones, while defining relations with the new curve; 3a) establish relations between new zones and existing curves; 3b) establish relations between zones.

⁴ Where not needed, we will omit $K$.

Two things have to be considered. In general, satisfaction of $\forall(Q, q)$ requires iterating through all possible matches for $Q$. However, when $Q$ consists of just one node, no iteration is necessary, and if $Q$ is the graph $[t]$ the derived rule has to be applied only to the newly added node, as it is already satisfied for the nodes of type $t$ which were in $G$ originally. Hence, we extend $TG$ to admit a special type of loop edge: the first rule is changed to $r : \emptyset \rightarrow [\bar{t}]$, where $[\bar{t}]$ designates a node of type $t$ with a marker loop.
For a rule⁵ $r : L \rightarrow R$, the function $\text{mark}(r)$ produces a set $P_r^t = \{r_h^t : L_h^t \rightarrow R_h^t \mid h : [t] \rightarrow L\}$, where $L_h^t$ and $R_h^t$ are obtained by adding the marker loop to the images $h([t])$ and $r \circ h([t])$, the immersions $m_h : L \hookrightarrow L_h^t$ and $m_h^t : R \hookrightarrow R_h^t$ preserve such images, and $r_h^t$ is the unique morphism s.t. $R_h^t$, with $m_h^t$ and $r_h^t$, is the pushout of $L_h^t \xleftarrow{m_h} L \xrightarrow{r} R$. $TU^g_t$ will apply $r$ or rules from $P_r^t$ in different situations. The rule $\text{delLoop} : [\bar{t}] \rightarrow [t]$ will conclude $TU^g_t$ by deleting the loop.

⁵ For each function operating on rules or types, we overload the symbol to accept sets as arguments.

Moreover, as in the examples above, some rules create new nodes if they cannot be provided by the context, and so conditions relative to the new nodes have to be satisfied. This potentially creates a situation in which an infinite recursion might start. To avoid this, we study the relations between types for which conditions are mutually recursive. In our example one such pair consists of Curve and Zone. Indeed, generating a curve implies the generation of a collection of zones, whilst the generation of a zone can imply the generation of a single curve and of the collection of zones related to the new curve: we need to distinguish between situations in which context, enriched with the new node which has started the process, has to be reused, and those in which a new node is needed to provide the correct context. Definition 2 provides the needed notation.

⁶ Note that $Q_\exists(t) = \{\exists(\overline{Q})\}$ if $[t] \leq \overline{Q}$, and $Q_\exists(t) = \emptyset$ otherwise.

**Definition 2** Let $t \in V_T$ be a type and $Q(t) \subset \Theta$ the set of conditions of the form $Op(a : A \rightarrow Q, q)$, for $Op \in \{\exists, \forall, \nexists\}$, s.t. $[t] \leq Q$ (i.e.
a node of type $t$ appears in $Q$). $\{Q_\exists(t), Q_\forall(t), Q_{\nexists}(t)\}$ is a partition of $Q(t)$ into existential⁶, universal and negative existential conditions for $t$, respectively. $V_T^\exists = \{t \mid [t] \leq \overline{Q}\}$ is the set of existentially quantified types. A partial order $\leq_c$ is induced on $Q_\forall(t)$ by $(C_1 \prec C_2) \iff ((A_1 \prec A_2) \lor ((A_1 \simeq A_2) \land (Q_1 \prec Q_2)))$. $\mathcal{DAG}(t) = (Q_\forall(t), \prec, s, t)$ is the directed acyclic graph induced on $Q_\forall(t)$, where $(q_1, q_2) \in \prec \iff q_1 \prec_c q_2 \land \nexists q_3$ s.t. $q_1 \prec_c q_3, q_3 \prec_c q_2$. We call $\text{Min}(t)$ the set of minimal models for $\Theta \cup \{\exists(\overline{Q} + [t])\}$ for $t \in V_T \setminus V_T^\exists$, and $\text{MIN}(S)$ the set of minimal models for $\Theta \cup \bigcup_{t \in S} \{\exists(\overline{Q} + [t])\}$ for $S \subset V_T$. For each condition $C \in Q_\forall(t)$, the rules in $\text{genUniRules}(C)$ will be applied in an order established by a function $\text{visit}(\mathcal{DAG}(t))$, which starts from initial nodes and proceeds from a join node only after all its incoming paths have been visited. In this way, progressively increasing contexts will have been produced, possibly providing new matches for the subsequent rules. In order to follow the abstract steps discussed above, for a type $t$ we organize the rules derived from $Q_\forall(t)$ into layers: $\text{LAYER}_1(t)$ contains rules which only add edges touching nodes of type $t$; $\text{LAYER}_2(t)$ contains rules which add at least one node (of any type) in a non-empty context (and possibly edges of any type); whilst $\text{LAYER}_3(t)$ contains rules which do not create nodes but add edges of any type, with at least one edge between instances of some type other than $t$.
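The visiting discipline for $\mathcal{DAG}(t)$ — start from initial nodes, enter a join node only after all its incoming paths have been processed — is realizable as a plain topological traversal. The paper only states the discipline; the following Python sketch is one possible implementation of such a $\text{visit}$ function (Kahn's algorithm), under our own flat encoding of the DAG as node ids plus precedence pairs:

```python
# Emit each condition only after all of its predecessors in the DAG have
# been emitted, so context built by earlier rules is available to later ones.
from collections import deque

def visit(nodes, edges):
    """nodes: iterable of condition ids; edges: set of (pred, succ) pairs."""
    preds = {n: 0 for n in nodes}
    succs = {n: [] for n in nodes}
    for a, b in edges:
        preds[b] += 1
        succs[a].append(b)
    # Initial nodes (no predecessors) are ready first; sort for determinism.
    ready = deque(sorted(n for n in nodes if preds[n] == 0))
    order = []
    while ready:
        n = ready.popleft()
        order.append(n)
        for m in succs[n]:
            preds[m] -= 1
            if preds[m] == 0:      # all incoming paths of the join visited
                ready.append(m)
    return order
```

On a chain such as C3 → C4 → C8 the chain order is returned, and a join node is always emitted after all of its predecessors.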
The sets $\text{Min}(t)$ provide context which is certainly present if a unit for the addition of an element of type $t$ has already been applied, while $\overline{Q}$ is guaranteed to be always present. Hence, $\text{reuseContext}$ will be invoked with parameter $X$ equal to $\overline{Q}$ or $\text{Min}(t)$, depending on the situation. Moreover, if an element of type $t'$ is created as a consequence of the generation of $[t]$, rules derived from visiting $\mathcal{DAG}(t')$ have also to be applied, in the context provided by the already applied rules. Hence, we introduce a notion of domination and a predicate $\text{dominates}(t,t') \equiv \mathcal{DAG}(t') \leq \mathcal{DAG}(t)$. Figure 8 shows the DAGs for the example introduced in Section 4. When adding a new zone, as we have $\text{dominates}(\text{Zone}, \text{Curve})$, the construction of $TU^g_{\text{Curve}}$ should recursively be invoked. But then, rules from $Q_\forall(\text{Curve})$ would create new zones, thus requiring the invocation of rules from $Q_\forall(\text{Zone})$, etc. Hence, in the context of the construction of $TU^g_t$, if $\mathcal{DAG}(t') \leq \mathcal{DAG}(t)$, then the rules from the conditions in $Q_\forall(t')$ are generated and used via $\text{reuseContext}$, with $X = \text{Min}(t')$, to take into account that the minimal context for $t'$ may already exist. Also, a function $\text{create}(r)$ returns the set of types produced by $r$, i.e. the types of the nodes in $V(R) \setminus V(L)$.

![Figure 8: The DAGs for Spider Graphs.](image)

The resulting algorithm $\text{CreateGenUnit}(t)$ populates $TU^g_t$ with rules derived from $Q_\forall(t)$, with added NACs to preserve conditions in $Q_{\nexists}(t)$, and organizes them according to ordering and layering: rules are applied only when context for their application is ready.

**Algorithm** $\text{CreateGenUnit}(t: \text{type}) : TU$

1.
**initialize** $\text{UNIT}$ with $r^t_i : \emptyset \rightarrow [\overline{t}]$;
2. **foreach** condition $C = \forall (Q,q) \in \Theta$ do \{ $R(C) = \text{genUniRules}(C)$; \}
3. return $\text{RecursiveGen}(t, \emptyset, \text{false})$;

**Algorithm** $\text{RecursiveGen}(t : \text{type}, S : \text{setOfTypes}, \text{inner} : \text{boolean}) : TU_g$

1. $\text{path} = \text{visit}(\text{DAG}(t))$; $X = \emptyset$; $\text{aux} = \emptyset$;
2. if isEmpty($S$) then \{ if $t \in V_T \setminus V_T^\exists$ then \{ $X = \overline{Q}$; \} else \{ $X = \text{Min}(t)$; \} \}
3. **foreach** condition $C = \forall (Q, \bigvee_{j \in J} q_j : Q \rightarrow W_j) \in \text{path}$ do \{
   1. **foreach** $k \in \{1, \ldots, 3\}$ do \{
   2. **foreach** $t' \in S$ do \{ if dominates($t,t'$) then \{ $\text{aux} = \text{aux} \cup \{t'\}$; \} \}
   3. if not isEmpty($\text{aux}$) then \{ $X = \text{MIN}(\text{aux})$; \} $\text{single} = \emptyset$; $\text{nosingle} = \emptyset$;
   4. **foreach** rule $r_{C,k} = \text{NAC} \overset{\mathcal{M}}{\rightarrow} L \rightarrow R \in R(C) \cap \text{LAYER}_k(t)$ do \{
      1. if $|V(L)| = 1$ then \{ $\text{single} = \text{single} \cup \{r_{C,k}\}$; \} else \{ $\text{nosingle} = \text{nosingle} \cup \{r_{C,k}\}$; \}
      2. if (inner) then \{ $\text{UNIT} = \text{concat}(\text{UNIT}, \text{alt}(\text{reuseContext}(\text{single}, X)))$; $\text{UNIT} = \text{concat}(\text{UNIT}, \text{alt}(\text{reuseContext}(\text{nosingle}, X)))$; \}
      3. else \{ $\text{UNIT} = \text{concat}(\text{UNIT}, \text{alt}(\text{mark}(\text{reuseContext}(\text{single}, X))))$; $\text{UNIT} = \text{concat}(\text{UNIT}, \text{alt}(\text{mark}(\text{reuseContext}(\text{nosingle}, X))))$; \}
   5. if ($k == 2$) then \{ **foreach** $t' \in \text{create}(r_{C,k})$ do \{
   6.
UNIT = concat(UNIT, RecursiveGen($t', S \cup \{t\}$, true)); \} \} \}

**foreach** rule $r : L \rightarrow R$ in UNIT do \{ **foreach** condition $C = \nexists X \in \Theta$ do \{ replace $r$ with genNAC($r, X$); \} \}
UNIT = concat(UNIT, delLoop);
return UNIT

**Theorem 1** A call CreateGenUnit($t, \Theta$): 1) terminates, and 2) produces a correct unit $TU_g$ s.t. given a graph $G$ typed on $TG$ s.t. $G \models \Theta$, $\forall H$ s.t. $G \rightarrow^{TU_g}_\Theta H$, we have $H \models \Theta \cup \{\exists(G + \overline{t})\}$.

*Proof.* (Sketch) 1) The first nested loop performs a finite number of iterations on conditions, layers and rules. The recursion on RecursiveGen terminates since the set $S$ strictly grows on each call and the set of types is finite. The final iteration to add NACs occurs on a finite number of conditions and rules. 2) If the first rule is applicable, then the application of $TU_g(t)$ terminates on each finite graph $G$ s.t. $G \models \Theta$. Indeed, the NACs prevent repeated applications of a rule on identical matches, and even if new matches can be created, the layering prevents infinite repetition of the execution of a rule. Moreover, the application of reuseContext avoids arbitrary generation of new elements. If a graph $H$ is obtained, then $H \models \exists(G + \overline{t})$, as only increasing rules have been applied. Suppose now that $H \not\models \Theta$. Then either: 1) $H$ violates some negative existential condition in $Q_{\nexists}(t)$, but this is prevented by the use of genNAC; or 2) $H \not\models \exists \overline{Q}_i$ for some existential condition, but this is impossible, since $G \models \Theta$, $G$ is included in $H$, and existential conditions are preserved by inclusions; or 3) $H \not\models C_i$ for some $C_i$ in some $Q_\forall(t)$, but this is impossible as all the rules are derived from the conditions in $Q_\forall(t)$ and all matches for their premises have been considered. $\square$

## 6 Application to Spider Diagrams

In contrast to algorithmic definitions of inference figures for Spider Diagrams, the proposed approach allows modeling both of syntactically correct Spider Diagrams and of an operational system, admitting intermediate diagrams in which some syntactic constraints are relaxed.
We now apply the constructions in Section 5. Firstly, considering the addition of a Curve, we have conditions $Q_\exists(\text{Curve}) = \{C_1\}$, $Q_\forall(\text{Curve}) = \{(C_7), (C_8, C_9), (C_{10}), (C_{11}, C_{12})\}$, where we have abused notation for universal conditions to indicate their ordering according to $\leq_c$; e.g. the premise of $C_7$ is included in the premise of both $C_8$ and $C_9$. The layers associated with the type Curve are as follows. $\text{LAYER}_1(\text{Curve})$ contains rules generated from $C_7$, $C_{11}$, $C_{12}$ and from the first two graphs in the bottom box of $C_{10}$, since these only add edges incident with nodes of type Curve. $\text{LAYER}_2(\text{Curve})$ contains rules generated from $C_8$ and $C_9$, which add Zone nodes, whilst $\text{LAYER}_3(\text{Curve})$ consists of the rule generated from the last graph in the bottom box of $C_{10}$, which adds an edge between nodes of type Zone. Note that depending on which iteration of the rules derived from $C_8$ or $C_9$ is applied first, the other iteration will be performed vacuously. The same happens for $C_{11}$ and $C_{12}$. Figure 9 shows a version of the rules derived from condition $C_{10}$, with one choice of marking. Each possible conclusion of the rules from $C_{10}$ gives rise to a NAC, preventing re-application of the rule to the same match, and the set of three NACs (these define the $j_n$ morphisms in the construction provided earlier) is presented together at the bottom left of the figure.
Using the same rule naming scheme as in Figure 9, and the initial rule $r_{\text{Curve}} : \emptyset \rightarrow \text{Curve}$, the algorithm produces a transformation unit of the form:

$$\text{TU}(\text{addCurve}) = \{ r_\text{Curve}^\dagger \cdot (r7.1^\dagger \cdot r7.2^\dagger) \cdot (r10.1^\dagger \cdot r10.2^\dagger) \cdot (r11^\dagger) \cdot (r12^\dagger) \cdot (r8^\dagger) \cdot (r9^\dagger) \cdot (r4.1^\dagger \cdot r4.2^\dagger) \cdot r6^\dagger \cdot (r7.1 \cdot r7.2) \cdot (r10.1 \cdot r10.2 \cdot r10.3) \cdot r11^\dagger \cdot r12^\dagger \cdot r10.3 \cdot (r4.1 \cdot r4.2) \cdot r6^\dagger \cdot (r7.1 \cdot r7.2) \cdot (r10.1 \cdot r10.2 \cdot r10.3) \cdot r11^\dagger \cdot r12^\dagger \cdot r10.3 \}$$

The iterations on rules from $r4.i$ to $r12$ in the second row derive from the fact that some Zone is created by the previous rules, so that the second top-level loop must be started, reusing context to prevent the creation of new curves. The analogous construction for the type Zone is based on the following specifications: $Q_\exists(\text{Zone}) = \{ C_1 \}$, $Q_\forall(\text{Zone}) = \{ (C_4), (C_6, C_7), (C_8, C_9), (C_{10}), (C_{11}, C_{12}) \}$. $\text{LAYER}_1(\text{Zone})$ contains the rules generated from $C_7$, $C_{10}$ and $C_{11}$, while $\text{LAYER}_2(\text{Zone})$ contains the rules generated from $C_8$ and $C_9$. In this case, the unit will first define the relations of the new zone with the existing curves, according to the rule from $C_4$, then create a required new curve (as the context does not provide one to satisfy $C_6$). After the iteration of the rule for $C_7$, the context will again not be sufficient for the application of the rules from $C_8$ and $C_9$. Finally, the rules from $C_{10}$, $C_{11}$ and $C_{12}$ will adjust the relations with the newly created curves and among all zones.
![Figure 9: A marked version of the 3 rules derived from condition C10 and the non-marked NAC.](image)

For a Spider, we have $Q_\exists(\text{Spider}) = \emptyset$, $Q_\forall(\text{Spider}) = \{ (C_2) \}$, generating a rule in layer 2. While the creation of a Spider requires the creation of a Foot, the Zone will be taken from the context, due to its presence in $\overline{Q}$: it has already been incorporated by the application of $\text{reuseContext}$. Insertion of a Foot will instead require the creation of a new Spider, if none exists, or its reuse if one has already been created. However, such a creation will fail if the spider already has a foot in each existing zone. In a similar way, a unit for the deletion of a curve would first remove the twin edges between zones attached to the marked curve, then all edges from all other curves to these zones, then all zones attached with an inside or outside edge to the curve to be removed, then all remaining connections from the marked Curve node to be deleted, and finally the marked node itself. Removal of a spider would be preceded by removal of all its feet and their attachments to zones. The construction of such units is beyond the scope of this paper.

## 7 Conclusions

We have provided a methodology for the automatic derivation of transformation units from a principal rule, via an algorithm that iteratively adds restorative rules to a unit for increasing rules. As a result, membership in the model language is ensured before and after the application of the unit, but not necessarily throughout the unit. The methodology exploits a rule layering approach, and rules are generated from graph conditions taking into account the rule application context. The automatic production of the rules needed to reassemble a syntactically correct diagram simplifies the specification of diagrammatic inference rules and therefore supports the development and comparison of syntactic and semantic variations of the systems.
Future work will define a similar algorithm for deleting rules, adding preparatory rules for performing a final deletion. Of course, semantic considerations here play a greater role than simple syntactic constraints. However, the constructed rules may provide a basis to be extended with additional context and consequences. For example, the specification of a transformation via pre- and post-conditions can be used to integrate syntactic rules with specific side effects. In this sense, this construction provides more flexibility to modelers, who can define the language through conditions, the main goal of a transformation, and the desired side effects in an independent manner. This removes the need to consider complex interplays between rules and constraints, as in approaches which derive amalgamated rules that have to achieve a global effect with a single specification. We note that most transformations involve redirection of associations from one element to another, or changing the context for an element. The construction presented in the paper can be adapted to define *accumulators* and *distributors* of associations, which would collect all edges to be redirected while deleting or constructing elements. Hence, such redirections might be taken as primitive constructs. The approach has been presented only for typed graphs. Extensions to graphs with inheritance and with attributes remain to be explored, in particular for the case where identifiers are used to describe the associations of an element with others. This would also be useful in other domains. For instance, model refactoring often involves the elimination of elements, or the creation of suitable contexts for their insertion. One example is the elimination of a composite state in a Statechart, which requires the elimination of all of its internal states.
Then, given a set of conditions stating that each state must be contained within a composite state, the construction in Section 5 could be applied to generate transformation units to be recursively invoked to visit the nesting tree. Another refactoring example is that of moving a method. This requires placing it in a different class and redirecting all its invocations, as well as the messages which may originate from its invocation, to its new location. Our construction can thus be used to manage the identification of the arcs related to such a method.

**Acknowledgements:** Partially funded by UK EPSRC grant EP/E011160: Visualisation with Euler Diagrams.
CSE341: Programming Languages Spring 2014 Unit 4 Summary Standard Description: This summary covers roughly the same material as class and recitation section. It can help to read about the material in a narrative style and to have the material for an entire unit of the course in a single document, especially when reviewing the material later. Please report errors in these notes, even typos. This summary is not a sufficient substitute for attending class, reading the associated code, etc. Contents Modules for Namespace Management We start by showing how ML modules can be used to separate bindings into different namespaces. We then build on this material to cover the much more interesting and important topic of using modules to hide bindings and types. To learn the basics of ML, pattern-matching, and functional programming, we have written small programs that are just a sequence of bindings. For larger programs, we want to organize our code with more structure. In ML, we can use structures to define modules that contain a collection of bindings. At its simplest, you can write `structure Name = struct bindings end` where `Name` is the name of your structure (you can pick anything; capitalization is a convention) and `bindings` is any list of bindings, containing values, functions, exceptions, datatypes, and type synonyms. Inside the structure you can use earlier bindings just like we have been doing “at top-level” (i.e., outside of any module). Outside the structure, you refer to a binding `b` in `Name` by writing `Name.b`. We have already been using this notation to use functions like `List.foldl`; now you know how to define your own structures. Though we will not do so in our examples, you can nest structures inside other structures to create a tree-shaped hierarchy. But in ML, modules are not expressions: you cannot define them inside of functions, store them in tuples, pass them as arguments, etc. 
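For instance, a tiny structure and a use of qualified access might look like this (the structure name and bindings here are invented for illustration, not taken from the course code):

```
(* a structure is just a named collection of bindings *)
structure Shapes =
struct
  val pi = 3.14159
  fun circle_area r = pi * r * r
  fun square_area s = s * s
end

(* outside the structure, refer to a binding b with Shapes.b *)
val a = Shapes.circle_area 2.0
```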
If in some scope you are using many bindings from another structure, it can be inconvenient to write `SomeLongStructureName.foo` many times. Of course, you can use a val-binding to avoid this, e.g., `val foo = SomeLongStructureName.foo`, but this technique is ineffective if we are using many different bindings from the structure (we would need a new variable for each) or for using constructor names from the structure in patterns. So ML allows you to write `open SomeLongStructureName`, which provides “direct” access (you can just write `foo`) to any bindings in the module that are mentioned in the module’s signature. The scope of an open is the rest of the enclosing structure (or the rest of the program at top-level). A common use of `open` is to write succinct testing code for a module outside the module itself. Other uses of `open` are often frowned upon because it may introduce unexpected shadowing, especially since different modules may reuse binding names. For example, a list module and a tree module may both have functions named `map`. Signatures So far, structures are providing just namespace management, a way to avoid different bindings in different parts of the program from shadowing each other. Namespace management is very useful, but not very interesting. Much more interesting is giving structures signatures, which are types for modules. They let us provide strict interfaces that code outside the module must obey. ML has several ways to do this with subtly different syntax and semantics; we just show one way to write down an explicit signature for a module. 
Here is an example signature definition and structure definition that says the structure MyMathLib must have the signature MATHLIB: ``` signature MATHLIB = sig val fact : int -> int val half_pi : real val doubler : int -> int end structure MyMathLib :> MATHLIB = struct fun fact x = if x=0 then 1 else x * fact (x - 1) val half_pi = Math.pi / 2.0 fun doubler y = y + y end ``` Because of the :> MATHLIB, the structure MyMathLib will type-check only if it actually provides everything the signature MATHLIB claims it does and with the right types. Signatures can also contain datatype, exception, and type bindings. Because we check the signature when we compile MyMathLib, we can use this information when we check any code that uses MyMathLib. In other words, we can just check clients assuming that the signature is correct. **Hiding Things** Before learning how to use ML modules to hide implementation details from clients, let’s remember that separating an interface from an implementation is probably the most important strategy for building correct, robust, reusable programs. Moreover, we can already use functions to hide implementations in various ways. For example, all 3 of these functions double their argument, and clients (i.e., callers) would have no way to tell if we replaced one of the functions with a different one: ``` fun double1 x = x + x fun double2 x = x * 2 val y = 2 fun double3 x = x * y ``` Another feature we use for hiding implementations is defining functions locally inside other functions. We can later change, remove, or add locally defined functions knowing the old versions were not relied on by any other code. From an engineering perspective, this is a crucial separation of concerns. I can work on improving the implementation of a function and know that I am not breaking any clients. Conversely, nothing clients can do can break how the functions above work. 
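The locally-defined-function idea can be sketched like this (our example, not code from the notes above): the helper lives entirely inside the function that uses it, so no other code can depend on it.

```
(* count is private to count_up_from_one: we can rename it, change its
   arguments, or replace it entirely without affecting any caller *)
fun count_up_from_one x =
  let
    fun count from =
      if from = x
      then [from]
      else from :: count (from + 1)
  in
    count 1
  end
```

Callers can only observe the result of `count_up_from_one`; swapping `count` for, say, a tail-recursive version would be invisible to them.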
But what if you wanted to have two top-level functions that code in other modules could use and have both of them use the same hidden functions? There are ways to do this (e.g., create a record of functions), but it would be convenient to have some top-level functions that were “private” to the module. In ML, there is no “private” keyword like in other languages. Instead, you use signatures that simply mention less: anything not explicitly in a signature cannot be used from the outside. For example, if we change the signature above to:

```
signature MATHLIB =
sig
  val fact : int -> int
  val half_pi : real
end
```

then client code cannot call MyMathLib.doubler. The binding simply is not in scope, so no use of it will type-check. In general, the idea is that we can implement the module however we like and only bindings that are explicitly listed in the signature can be called directly by clients.

Introducing our extended example

The rest of our module-system study will use as an example a small module that implements rational numbers. While a real library would provide many more features, ours will just support creating fractions, adding two fractions, and converting fractions to strings. Our library intends to (1) prevent denominators of zero and (2) keep fractions in reduced form (3/2 instead of 9/6 and 4 instead of 4/1). While negative fractions are fine, internally the library never has a negative denominator (−3/2 instead of 3/−2 and 3/2 instead of −3/−2). The structure below implements all these ideas, using the helper function reduce, which itself uses gcd, for reducing a fraction. Our module maintains invariants, as seen in the comments near the top of the code. These are properties of fractions that all the functions both assume to be true and guarantee to keep true. If one function violates the invariants, other functions might do the wrong thing.
For example, the gcd function is incorrect for negative arguments, but because denominators are never negative, gcd is never called with a negative argument.

```
structure Rational1 =
struct
  (* Invariant 1: all denominators > 0
     Invariant 2: rationals kept in reduced form, including
     that a Frac never has a denominator of 1 *)
  datatype rational = Whole of int | Frac of int*int
  exception BadFrac

  (* gcd and reduce help keep fractions reduced,
     but clients need not know about them *)
  (* they _assume_ their inputs are not negative *)
  fun gcd (x,y) =
      if x=y
      then x
      else if x < y
      then gcd(x,y-x)
      else gcd(y,x)

  fun reduce r =
      case r of
          Whole _ => r
        | Frac(x,y) =>
              if x=0
              then Whole 0
              else let val d = gcd(abs x,y) in (* using invariant 1 *)
                       if d=y
                       then Whole(x div d)
                       else Frac(x div d, y div d)
                   end

  (* when making a frac, we ban zero denominators *)
  fun make_frac (x,y) =
      if y = 0
      then raise BadFrac
      else if y < 0
      then reduce(Frac(~x,~y))
      else reduce(Frac(x,y))

  (* using math properties, both invariants hold of the result
     assuming they hold of the arguments *)
  fun add (r1,r2) =
      case (r1,r2) of
          (Whole(i),Whole(j))   => Whole(i+j)
        | (Whole(i),Frac(j,k))  => Frac(j+k*i,k)
        | (Frac(j,k),Whole(i))  => Frac(j+k*i,k)
        | (Frac(a,b),Frac(c,d)) => reduce (Frac(a*d + b*c, b*d))

  (* given invariant, prints in reduced form *)
  fun toString r =
      case r of
          Whole i   => Int.toString i
        | Frac(a,b) => (Int.toString a) ^ "/" ^ (Int.toString b)
end
```

Signatures for Our Example

Let us now try to give our example module a signature such that clients can use it but not violate its invariants.
Since reduce and gcd are helper functions that we do not want clients to rely on or misuse, one natural signature would be as follows:

```
signature RATIONAL_A =
sig
  datatype rational = Frac of int * int | Whole of int
  exception BadFrac
  val make_frac : int * int -> rational
  val add : rational * rational -> rational
  val toString : rational -> string
end
```

To use this signature to hide gcd and reduce, we can just change the first line of the structure definition above to `structure Rational1 :> RATIONAL_A`. While this approach ensures clients do not call gcd or reduce directly (since they “do not exist” outside the module), this is not enough to ensure the bindings in the module are used correctly. What “correct” means for a module depends on the specification for the module (not the definition of the ML language), so let’s be more specific about some of the desired properties of our library for rational numbers:

- Property: `toString` always returns a string representation in reduced form
- Property: No code goes into an infinite loop
- Property: No code divides by zero
- Property: There are no fractions with denominators of 0

The properties are externally visible; they are what we promise clients. In contrast, the invariants are internal; they are facts about the implementation that help ensure the properties. The code above maintains the invariants and relies on them in certain places to ensure the properties, notably:

- gcd will violate the properties if called with an argument ≤ 0, but since we know denominators are > 0, reduce can pass denominators to gcd without concern.
- toString and most cases of add do not need to call reduce because they can assume their arguments are already in reduced form.
- add uses the property of mathematics that the product of two positive numbers is positive, so we know a non-positive denominator is not introduced.

Unfortunately, under signature RATIONAL_A, clients must still be trusted not to break the properties and invariants!
Because the signature exposed the definition of the datatype binding, the ML type system will not prevent clients from using the constructors Frac and Whole directly, bypassing all our work to establish and preserve the invariants. Clients could make “bad” fractions like `Rational.Frac(1,0), Rational.Frac(3,~2),` or `Rational.Frac(9,6)`, any of which could then end up causing gcd or toString to misbehave according to our specification. While we may have intended for the client only to use `make_frac, add, and toString`, our signature allows more. A natural reaction would be to hide the datatype binding by removing the line `datatype rational = Frac of int * int | Whole of int`. While this is the right intuition, the resulting signature makes no sense and would be rejected: it repeatedly mentions a type rational that is not known to exist. What we want to say instead is that there is a type rational but clients cannot know anything about what the type is other than it exists. In a signature, we can do just that with an abstract type, as this signature shows: ``` signature RATIONAL_B = sig type rational (* type now abstract *) exception BadFrac val make_frac : int * int -> rational val add : rational * rational -> rational val toString : rational -> string end ``` (Of course, we also have to change the first line of the structure definition to use this signature instead. That is always true, so we will stop mentioning it.) This new feature of abstract types, which makes sense only in signatures, is exactly what we want. It lets our module define operations over a type without revealing the implementation of that type. The syntax is just to give a type binding without a definition. The implementation of the module is unchanged; we are simply changing how much information clients have. Now, how can clients make rationals? Well, the first one will have to be made with make_frac. After that, more rationals can be made with make_frac or add. 
There is no other way, so thanks to the way we wrote make_frac and add, all rationals will always be in reduced form with a positive denominator. What RATIONAL_B took away from clients compared to RATIONAL_A is the constructors Frac and Whole. So clients cannot create rationals directly and they cannot pattern-match on rationals. They have no idea how they are represented internally. They do not even know rational is implemented as a datatype. Abstract types are a Really Big Deal in programming.

**A Cute Twist: Expose the Whole Function**

By making the rational type abstract, we took away from clients the Frac and Whole constructors. While this was crucial for ensuring clients could not create a fraction that was not reduced or had a non-positive denominator, only the Frac constructor was problematic. Since allowing clients to create whole numbers directly cannot violate our specification, we could add a function like:

```
fun make_whole x = Whole x
```

to our structure and `val make_whole : int -> rational` to our signature. But this is unnecessary function wrapping; a shorter implementation would be:

```
val make_whole = Whole
```

and of course clients cannot tell which implementation of make_whole we are using. But why create a new binding make_whole that is just the same thing as Whole? Instead, we could just export the constructor as a function with this signature and no changes or additions to our structure:

```
signature RATIONAL_C =
sig
  type rational (* type still abstract *)
  exception BadFrac
  val Whole : int -> rational (* client knows only that Whole is a function *)
  val make_frac : int * int -> rational
  val add : rational * rational -> rational
  val toString : rational -> string
end
```

This signature tells clients there is a function bound to Whole that takes an int and produces a rational. That is correct: this binding is one of the things the datatype binding in the structure creates.
So we are exposing part of what the datatype binding provides: that rational is a type and that Whole is bound to a function. We are still hiding the rest of what the datatype binding provides: the Frac constructor and pattern-matching with Frac and Whole.

**Rules for Signature Matching**

So far, our discussion of whether a structure “should type-check” given a particular signature has been rather informal. Let us now enumerate more precise rules for what it means for a structure to match a signature. (This terminology has nothing to do with pattern-matching.) If a structure does not match a signature assigned to it, then the module does not type-check. A structure Name matches a signature BLAH if:

- For every val-binding in BLAH, Name must have a binding with that type or a more general type (e.g., the implementation can be polymorphic even if the signature says it is not — see below for an example). This binding could be provided via a val-binding, a fun-binding, or a datatype-binding.
- For every non-abstract type-binding in BLAH, Name must have the same type binding.
- For every abstract type-binding in BLAH, Name must have some binding that creates that type (either a datatype binding or a type synonym).

Notice that Name can have any additional bindings that are not in the signature.

**Equivalent Implementations**

Given our property- and invariant-preserving signatures RATIONAL_B and RATIONAL_C, we know clients cannot rely on any helper functions or the actual representation of rationals as defined in the module. So we could replace the implementation with any equivalent implementation that had the same properties: as long as any call to the toString binding in the module produced the same result, clients could never tell. This is another essential software-development task: improving/changing a library in a way that does not break clients. Knowing clients obey an abstraction boundary, as enforced by ML’s signatures, is invaluable.
As a simple example, we could make gcd a local function defined inside of reduce and know that no client will fail to work since they could not rely on gcd’s existence.

More interestingly, let’s change one of the invariants of our structure. Let’s not keep rationals in reduced form. Instead, let’s just reduce a rational right before we convert it to a string. This simplifies make_frac and add, while complicating toString, which is now the only function that needs reduce. Here is the whole structure, which would still match signatures RATIONAL_A, RATIONAL_B, or RATIONAL_C:

```
structure Rational2 :> RATIONAL_A (* or B or C *) =
struct
  datatype rational = Whole of int | Frac of int*int
  exception BadFrac

  fun make_frac (x,y) =
    if y = 0
    then raise BadFrac
    else if y < 0
    then Frac(~x,~y)
    else Frac(x,y)

  fun add (r1,r2) =
    case (r1,r2) of
      (Whole(i),Whole(j))   => Whole(i+j)
    | (Whole(i),Frac(j,k))  => Frac(j+k*i,k)
    | (Frac(j,k),Whole(i))  => Frac(j+k*i,k)
    | (Frac(a,b),Frac(c,d)) => Frac(a*d + b*c, b*d)

  fun toString r =
    let
      fun gcd (x,y) =
        if x=y
        then x
        else if x < y
        then gcd(x, y-x)
        else gcd(y, x)

      fun reduce r =
        case r of
          Whole _ => r
        | Frac(x,y) =>
            if x=0
            then Whole 0
            else
              let val d = gcd(abs x, y) in
                if d=y
                then Whole(x div d)
                else Frac(x div d, y div d)
              end
    in
      case reduce r of
        Whole i   => Int.toString i
      | Frac(x,y) => Int.toString x ^ "/" ^ Int.toString y
    end
end
```

If we give Rational1 and Rational2 the signature RATIONAL_A, both will type-check, but clients can still distinguish them. For example, Rational1.toString(Rational1.Frac(21,3)) produces "21/3", but Rational2.toString(Rational2.Frac(21,3)) produces "7". But if we give Rational1 and Rational2 the signature RATIONAL_B or RATIONAL_C, then the structures are equivalent for any possible client. This is why it is important to use restrictive signatures like RATIONAL_B to begin with: so you can change the structure later without checking all the clients.
While our two structures so far maintain different invariants, they do use the same definition for the type rational. This is not necessary with signatures RATIONAL_B or RATIONAL_C; a different structure having these signatures could implement the type differently. For example, suppose we realize that special-casing whole numbers internally is more trouble than it is worth. We could instead just use int*int and define this structure:

```
structure Rational3 :> RATIONAL_B (* or C *) =
struct
  type rational = int*int
  exception BadFrac

  fun make_frac (x,y) =
    if y = 0
    then raise BadFrac
    else if y < 0
    then (~x,~y)
    else (x,y)

  fun Whole i = (i,1)

  fun add ((a,b),(c,d)) = (a*d + c*b, b*d)

  fun toString (x,y) =
    if x=0
    then "0"
    else
      let
        fun gcd (x,y) =
          if x=y
          then x
          else if x < y
          then gcd(x, y-x)
          else gcd(y, x)
        val d = gcd (abs x, y)
        val num = x div d
        val denom = y div d
      in
        Int.toString num ^ (if denom=1 then "" else "/" ^ Int.toString denom)
      end
end
```

(This structure takes the Rational2 approach of having toString reduce fractions, but that issue is largely orthogonal to the definition of rational.)

Notice that this structure provides everything RATIONAL_B requires. The function make_frac is interesting in that it takes an int*int and returns an int*int, but clients do not know the actual return type, only the abstract type rational. And while giving it an argument type of rational in the signature would match, it would make the module useless since clients would not be able to create a value of type rational. Nonetheless, clients cannot pass just any int*int to add or toString; they must pass something that they know has type rational. As with our other structures, that means rationals are created only by make_frac and add, which enforces all our invariants. Our structure does not match RATIONAL_A since it does not provide rational as a datatype with constructors Frac and Whole.
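Even though Rational3’s rationals really are just pairs of ints, sealing with the abstract type keeps clients from exploiting that. A sketch of what does and does not type-check, assuming the Rational3 structure above:

```ml
val r = Rational3.make_frac (9, 6)   (* fine: r has abstract type Rational3.rational *)
val s = Rational3.toString r         (* fine: "3/2", since toString reduces *)

(* rejected by the type-checker:
   val bad    = Rational3.toString (9, 6)   -- a plain int*int is not known to be rational
   val (x, y) = r                           -- clients cannot see that rational is a pair
*)
```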
Our structure does match signature RATIONAL_C because we explicitly added a function Whole of the right type. No client can distinguish our “real function” from the previous structures’ use of the Whole constructor as a function.

The fact that `fun Whole i = (i,1)` matches `val Whole : int -> rational` is interesting. The type of Whole in the module is actually polymorphic: 'a -> 'a * int. ML signature matching allows 'a -> 'a * int to match int -> rational because 'a -> 'a * int is more general than int -> int * int and int -> rational is a correct abstraction of int -> int * int. Less formally, the fact that Whole has a polymorphic type inside the module does not mean the signature has to give it a polymorphic type outside the module. And in fact it cannot while using the abstract type, since Whole cannot have the type 'a -> int * int or 'a -> rational.

**Different Modules Define Different Types**

While we have defined different structures (e.g., Rational1, Rational2, and Rational3) with the same signature (e.g., RATIONAL_B), that does not mean that the bindings from the different structures can be used with each other. For example, Rational1.toString(Rational2.make_frac(2,3)) will not type-check, which is a good thing since it would print an unreduced fraction. The reason it does not type-check is that Rational2.rational and Rational1.rational are different types. They were not created by the same datatype binding even though they happen to look identical. Moreover, outside the module we do not know they look identical. Indeed, Rational3.toString(Rational2.make_frac(2,3)) really must not type-check since Rational3.toString expects an int*int but Rational2.make_frac(2,3) returns a value made out of the Rational2.Frac constructor.

**What is Type Inference?**

While we have been using ML type inference for a while now, we have not studied it carefully. We will first carefully define what type inference is and then see via several examples how ML type inference works.
Java, C, and ML are all examples of statically typed languages, meaning every binding has a type that is determined “at compile-time,” i.e., before any part of the program is run. The type-checker is a compile-time procedure that either accepts or rejects a program. By contrast, Racket, Ruby, and Python are dynamically typed languages, meaning the type of a binding is not determined ahead of time and computations like binding 42 to x and then treating x as a string result in run-time errors. After we do some programming with Racket, we will compare the advantages and disadvantages of static versus dynamic typing as a significant course topic. Unlike Java and C, ML is implicitly typed, meaning programmers rarely need to write down the types of bindings. This is often convenient (though some disagree as to whether it makes code easier or harder to read), but in no way changes the fact that ML is statically typed. Rather, the type-checker has to be more sophisticated because it must infer (i.e., figure out) what the type annotations “would have been” had programmers written all of them. In principle, type inference and type checking could be separate steps (the inferencer could do its part and the checker could see if the result should type-check), but in practice they are often merged into “the type-checker.” Note that a correct type-inferencer must find a solution to what all the types should be whenever such a solution exists, else it must reject the program. Whether type inference for a particular programming language is easy, difficult, or impossible is often not obvious. It is not proportional to how permissive the type system is. For example, the “extreme” type systems that “accept everything” and “accept nothing” are both very easy to do inference for. When we say type inference may be impossible, we mean this in the technical sense of undecidability, like the famous halting problem. 
We mean there are type systems for which no computer program can implement type inference such that (1) the inference process always terminates, (2) the inference process always succeeds if inference is possible, and (3) the inference process always fails if inference is not possible.

Fortunately, ML was rather cleverly designed so that type inference can be performed by a fairly straightforward and elegant algorithm. While there are programs for which inference is intractably slow, programs people write in practice never cause such behavior.

We will demonstrate key aspects of the algorithm for ML type inference with a few examples. This will give you a sense that type inference is not “magic.” In order to move on to other course topics, we will not describe the full algorithm or write code to implement it.

ML type inference ends up intertwined with parametric polymorphism — when the inferencer determines a function’s argument or result “could be anything” the resulting type uses 'a, 'b, etc. But type inference and polymorphism are entirely separate concepts: a language could have one or the other. For example, Java has generics but no inference for method argument/result types.

**Overview of ML Type Inference**

Here is an overview of how ML type inference works (more examples to follow):

- It determines the types of bindings in order, using the types of earlier bindings to infer the types of later ones. This is why you cannot use later bindings in a file. (When you need to, you use mutual recursion and type inference determines the types of all the mutually recursive bindings together. Mutual recursion is covered later in this unit.)
- For each `val` or `fun` binding, it analyzes the binding to determine necessary facts about its type. For example, if we see the expression `x+1`, we conclude that `x` must have type `int`. We gather similar facts for function calls, pattern-matches, etc.
- Afterward, use type variables (e.g., `'a`) for any unconstrained types in function arguments or results. - (Enforce the value restriction — only variables and values can have polymorphic types, as discussed later.) The amazing fact about the ML type system is that “going in order” this way never causes us to reject a program that could type-check nor do we ever accept a program we should not. So explicit type annotations really are optional unless you use features like `#1`. (The problem with `#1` is that it does not give enough information for type inference to know what other fields the tuple/record should have, and the ML type system requires knowing the exact number of fields and all the fields' names.) Here is an initial, very simple example: ```ml val x = 42 fun f(y,z,w) = if y then z+x else 0 ``` Type inference first gives `x` type `int` since `42` has type `int`. Then it moves on to infer the type for `f`. Next we will study, via other examples, a more step-by-step procedure, but here let us just list the key facts: - `y` must have type `bool` because we test it in a conditional. - `z` must have type `int` because we add it to something we already determined has type `int`. - `w` can have any type because it is never used. - `f` must return an `int` because its body is a conditional where both branches return an `int`. (If they disagreed, type-checking would fail.) So the type of `f` must be `bool * int * 'a -> int`. **More Thorough Examples of ML Type Inference** We will now work through a few examples step-by-step, generating all the facts that the type-inference algorithm needs. Note that humans doing type inference “in their head” often take shortcuts just like humans doing arithmetic in their head, but the point is there is a general algorithm that methodically goes through the code gathering constraints and putting them together to get the answer. 
As a first example, consider inferring the type for this function:

```ml
fun f x =
  let val (y,z) = x in
    (abs y) + z
  end
```

Here is how we can infer the type:

- Looking at the first line, f must have type $T_1 \rightarrow T_2$ for some types $T_1$ and $T_2$, and in the function body f has this type and x has type $T_1$.
- Looking at the val-binding, x must be a pair type (else the pattern-match makes no sense), so in fact $T_1 = T_3 \times T_4$ for some $T_3$ and $T_4$, and $y$ has type $T_3$ and $z$ has type $T_4$.
- Looking at the addition expression, we know from the context that abs has type $\text{int} \rightarrow \text{int}$, so $y$ having type $T_3$ means $T_3 = \text{int}$. Similarly, since abs $y$ has type $\text{int}$, the other argument to + must have type $\text{int}$, so $z$ having type $T_4$ means $T_4 = \text{int}$.
- Since the type of the addition expression is $\text{int}$, the type of the let-expression is $\text{int}$. And since the type of the let-expression is $\text{int}$, the return type of f is $\text{int}$, i.e., $T_2 = \text{int}$.

Putting all these constraints together, $T_1 = \text{int} \times \text{int}$ (since $T_1 = T_3 \times T_4$) and $T_2 = \text{int}$, so f has type $\text{int} \times \text{int} \rightarrow \text{int}$.

Next example:

```ml
fun sum xs =
  case xs of
    [] => 0
  | x::xs' => x + (sum xs')
```

- From the first line, there exist types $T_1$ and $T_2$ such that sum has type $T_1 \rightarrow T_2$ and xs has type $T_1$.
- Looking at the case-expression, xs must have a type that is compatible with all of the patterns. Looking at the patterns, both of them match any list, since they are built from list constructors (in the x::xs' case the subpatterns match anything of any type). So since xs has type $T_1$, in fact $T_1 = T_3~\text{list}$ for some type $T_3$.
- Looking at the right-hand sides of the case branches, we know they must have the same type as each other and this type is $T_2$.
Since 0 has type $\text{int}$, $T_2 = \text{int}$.

- Looking at the second case branch, we type-check it in a context where x and xs' are available. Since we are matching the pattern x::xs' against a $T_3~\text{list}$, it must be that x has type $T_3$ and xs' has type $T_3~\text{list}$.
- Now looking at the right-hand side, we add x, so in fact $T_3 = \text{int}$. Moreover, the recursive call type-checks because xs' has type $T_3~\text{list}$ and $T_3~\text{list} = T_1$ and sum has type $T_1 \rightarrow T_2$. Finally, since $T_2 = \text{int}$, adding `sum xs'` type-checks.

Putting everything together, we get that sum has type $\text{int}~\text{list} \rightarrow \text{int}$.

Notice that before we got to `sum xs'` we had already inferred everything, but we still have to check that types are used consistently and reject otherwise. For example, if we had written `sum x`, that cannot type-check — it is *inconsistent* with previous facts. Let us see this more thoroughly to see what happens:

```ml
fun broken_sum xs =
  case xs of
    [] => 0
  | x::xs' => x + (broken_sum x)
```

- Type inference for broken_sum proceeds largely the same as for sum. The first four bullets from the previous example all apply, giving broken_sum type $T_3~\text{list} \rightarrow \text{int}$, xs type $T_3~\text{list}$, x type $T_3$, and xs' type $T_3~\text{list}$. Moreover, $T_3 = \text{int}$.
- We depart from the correct sum implementation with the call `broken_sum x`. For this call to type-check, x must have the same type as broken_sum's parameter, or in other words, $T_1 = T_3$. However, we know that $T_1 = T_3~\text{list}$, so this new constraint $T_1 = T_3$ actually generates a contradiction: $T_3 = T_3~\text{list}$. If we want to be more concrete, we can use our knowledge that $T_3 = \text{int}$ to rewrite this as $\text{int} = \text{int}~\text{list}$.
Looking at the definition of broken_sum it should be obvious that this is exactly the problem: we tried to use x as an int and as an int list. When your ML program does not type-check, the type-checker reports the expression where it discovered a contradiction and what types were involved in that contradiction. While sometimes this information is helpful, other times the actual problem is with a different expression, but the type-checker did not reach a contradiction until later.

**Examples with Polymorphic Types**

Our remaining examples will infer polymorphic types. All we do is follow the same procedure we did above, but when we are done, we will have some parts of the function's type that are still *unconstrained*. For each $T_i$ that "can be anything" we use a type variable ('a, 'b, etc.).

```ml
fun length xs =
  case xs of
    [] => 0
  | x::xs' => 1 + (length xs')
```

Type inference proceeds much like with sum. We end up determining:

- length has type $T_1 \rightarrow T_2$.
- xs has type $T_1$.
- $T_1 = T_3~\text{list}$ (due to the pattern-match).
- $T_2 = \text{int}$ because 0 can be the result of a call to length.
- x has type $T_3$ and xs' has type $T_3~\text{list}$.
- The recursive call `length xs'` type-checks because xs' has type $T_3~\text{list}$, which is $T_1$, the argument type of length. And we can add the result because $T_2 = \text{int}$.

So we have all the same constraints as for sum, except we do not have $T_3 = \text{int}$. In fact, $T_3$ can be anything and length will type-check. So type inference recognizes that when it is all done, it has length with type $T_3~\text{list} \rightarrow \text{int}$ and $T_3$ can be anything. So we end up with the type `'a list -> int`, as expected.
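To see the inferred polymorphic type in action, the same length function can be applied to lists of any element type; 'a is instantiated separately at each call. A small self-contained sketch (this definition shadows the Basis Library's `length`, which is fine in SML):

```ml
fun length xs =
  case xs of
    [] => 0
  | _::xs' => 1 + length xs'

val a = length [1,2,3]       (* 'a instantiated to int:    a = 3 *)
val b = length ["x","y"]     (* 'a instantiated to string: b = 2 *)
val c = length []            (* works for the empty list:  c = 0 *)
```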
Again the rule is simple: for each $T_i$ in the final result that cannot be constrained, use a type variable. A second example:

```ml
fun compose (f,g) = fn x => f (g x)
```

- Since the argument to compose must be a pair (from the pattern used for its argument), compose has type $T_1 * T_2 \rightarrow T_3$, f has type $T_1$, and g has type $T_2$.
- Since compose returns a function, $T_3$ is some $T_4 \rightarrow T_5$ where in that function's body, x has type $T_4$.
- So g must have type $T_4 \rightarrow T_6$ for some $T_6$, i.e., $T_2 = T_4 \rightarrow T_6$.
- And f must have type $T_6 \rightarrow T_7$ for some $T_7$, i.e., $T_1 = T_6 \rightarrow T_7$.
- But the result of f is the result of the function returned by compose, so $T_7 = T_5$ and so $T_1 = T_6 \rightarrow T_5$.

Putting together $T_1 = T_6 \rightarrow T_5$ and $T_2 = T_4 \rightarrow T_6$ and $T_3 = T_4 \rightarrow T_5$, we have a type for compose of $(T_6 \rightarrow T_5) * (T_4 \rightarrow T_6) \rightarrow (T_4 \rightarrow T_5)$. There is nothing else to constrain the types $T_4$, $T_5$, and $T_6$, so we replace them consistently to end up with `('a->'b)*('c->'a) -> ('c->'b)` as expected (and the last set of parentheses are optional, but that is just syntax).

Here is a simpler example that also has multiple type variables:

```ml
fun f (x,y,z) =
  if true
  then (x,y,z)
  else (y,x,z)
```

- The first line requires that f has type $T_1 * T_2 * T_3 \rightarrow T_4$, x has type $T_1$, y has type $T_2$, and z has type $T_3$.
- The two branches of the conditional must have the same type and this is the return type of the function $T_4$. Therefore, $T_4 = T_1 * T_2 * T_3$ and $T_4 = T_2 * T_1 * T_3$. This constraint requires $T_1 = T_2$.
Putting together these constraints (and no others), f will type-check with type $T_1 * T_1 * T_3 \rightarrow T_1 * T_1 * T_3$ for any types $T_1$ and $T_3$. So replacing each type consistently with a type variable, we get `'a*'a*'b -> 'a*'a*'b`, which is correct: x and y must have the same type, but z can (but need not) have a different type. Notice that the type-checker always requires both branches of a conditional to type-check with the same type, even though here we know which branch will be evaluated.

**Optional: The Value Restriction**

*As described so far* in this unit, the ML type system is *unsound*, meaning that it would accept programs that when run could have values of the wrong types, such as putting an int where we expect a string. The problem results from a combination of polymorphic types and mutable references, and the fix is a special restriction to the type system called *the value restriction*.

This is an example program that demonstrates the problem:

```ml
val r = ref NONE           (* 'a option ref *)
val _ = r := SOME "hi"     (* instantiate 'a with string *)
val i = 1 + valOf(!r)      (* instantiate 'a with int *)
```

Straightforward use of the rules for type checking/inference would accept this program even though we should not – we end up trying to add 1 to "hi". Yet everything seems to type-check given the types for the functions/operators `ref` ('a -> 'a ref), `:=` ('a ref * 'a -> unit), and `!` ('a ref -> 'a).

To restore soundness, we need a stricter type system that does not let this program type-check. The choice ML made is to prevent the first line from having a polymorphic type. Therefore, the second and third lines will not type-check because they will not be able to instantiate an 'a with string or int. In general, ML will give a variable in a val-binding a polymorphic type only if the expression in the val-binding is a value or a variable. This is called the value restriction.
In our example, `ref NONE` is a call to the function ref. Function calls are not variables or values. So we get a warning and r is given a type `?X1 option ref` where ?X1 is a “dummy type,” not a type variable. This makes r not useful and the later lines do not type-check. It is not at all obvious that this restriction suffices to make the type system sound, but in fact it is sufficient.

For r above, we can use the expression ref NONE, but we have to use a type annotation to give r a non-polymorphic type, such as int option ref. Whatever we pick, one of the next two lines will not type-check.

As we saw previously when studying partial application, the value restriction is occasionally burdensome even when it is not a problem because we are not using mutation. We saw that this binding falls victim to the value restriction and is not made polymorphic:

```ml
val pairWithOne = List.map (fn x => (x,1))
```

We saw multiple workarounds. One is to use a function binding, even though without the value restriction it would be unnecessary function wrapping. This function has the desired type `'a list -> ('a * int) list`:

```ml
fun pairWithOne xs = List.map (fn x => (x,1)) xs
```

One might wonder why we cannot enforce the value restriction only for references (where we need it) and not for immutable types like lists. The answer is the ML type-checker cannot always know which types are really references and which are not. In the code below, we need to enforce the value restriction on the last line, because 'a foo and 'a ref are the same type.

```ml
type 'a foo = 'a ref
val f : 'a -> 'a foo = ref
val r = f NONE
```

Because of ML’s module system, the type-checker does not always know the definition of type synonyms (recall this is a good thing). So to be safe, it enforces the value restriction for all types.
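For the original problematic program, the annotation workaround looks like this (a sketch; int option is chosen here just for concreteness, committing r to one non-polymorphic type):

```ml
val r : int option ref = ref NONE   (* annotation gives r a non-polymorphic type *)
val _ = r := SOME 42                (* fine: int matches the annotation *)
val i = 1 + valOf (!r)              (* fine: i = 43 *)
(* but r := SOME "hi" would now be rejected, restoring soundness *)
```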
**Optional: Some Things that Make Type Inference More Difficult**

Now that we have seen how ML type inference works, we can make two interesting observations:

- Inference would be more difficult if ML had subtyping (e.g., if every triple could also be a pair) because we would not be able to conclude things like “T3=T1*T2” since the equality would be overly restrictive. We would instead need constraints indicating that T3 is a tuple with at least two fields. Depending on various details, this can be done, but type inference is more difficult and the results are more difficult to understand.
- Inference would be more difficult if ML did not have parametric polymorphism since we would have to pick some type for functions like `length` and `compose` and that could depend on how they are used.

**Mutual Recursion**

We have seen many examples of recursive functions and many examples of functions using other functions as helper functions, but what if we need a function f to call g and g to call f? That can certainly be useful, but ML’s rule that bindings can only use earlier bindings makes it more difficult — which should come first, f or g? It turns out ML has special support for mutual recursion using the keyword `and` and putting the mutually recursive functions next to each other. Similarly, we can have mutually recursive `datatype` bindings. After showing these new constructs, we will show that you can actually work around a lack of support for mutually recursive functions by using higher-order functions, which is a useful trick in general and in particular in ML if you do not want your mutually recursive functions next to each other.

Our first example uses mutual recursion to process an `int list` and return a `bool`. It returns true if the list strictly alternates between 1 and 2 and ends with a 2.
Of course there are many ways to implement such a function, but our approach does a nice job of having a function for each “state” (such as “a 1 must come next” or “a 2 must come next”). In general, many problems in computer science can be modeled by such finite state machines, and mutually recursive functions, one for each state, are an elegant way to implement finite state machines.¹

```ml
fun match xs =
  let
    fun s_need_one xs =
      case xs of
        [] => true
      | 1::xs' => s_need_two xs'
      | _ => false
    and s_need_two xs =
      case xs of
        [] => false
      | 2::xs' => s_need_one xs'
      | _ => false
  in
    s_need_one xs
  end
```

(The code uses integer constants in patterns, which is an occasionally convenient ML feature, but not essential to the example.)

¹Because all function calls are tail calls, the code runs in a small amount of space, just as one would expect for an implementation of a finite state machine.

In terms of syntax, we define mutually recursive functions by simply replacing the keyword `fun` for all functions except the first with `and`. The type-checker will type-check all the functions (two in the example above) together, allowing calls among them regardless of order.

Here is a second (silly) example that also uses two mutually recursive `datatype` bindings. The definitions of types `t1` and `t2` refer to each other, which is allowed by using `and` in place of `datatype` for the second one. This defines two new datatypes, `t1` and `t2`.

```ml
datatype t1 = Foo of int | Bar of t2
and t2 = Baz of string | Quux of t1

fun no_zeros_or_empty_strings_t1 x =
  case x of
    Foo i => i <> 0
  | Bar y => no_zeros_or_empty_strings_t2 y
and no_zeros_or_empty_strings_t2 x =
  case x of
    Baz s => size s > 0
  | Quux y => no_zeros_or_empty_strings_t1 y
```

Now suppose we wanted to implement the “no zeros or empty strings” functionality of the code above but for some reason we did not want to place the functions next to each other or we were in a language with no support for mutually recursive functions. We can write almost the same code by having the “later” function pass itself to a version of the “earlier” function that takes a function as an argument:

```ml
fun no_zeros_or_empty_strings_t1 (f, x) =
  case x of
    Foo i => i <> 0
  | Bar y => f y

fun no_zeros_or_empty_string_t2 x =
  case x of
    Baz s => size s > 0
  | Quux y => no_zeros_or_empty_strings_t1 (no_zeros_or_empty_string_t2, y)
```

This is yet-another powerful idiom allowed by functions taking functions.

### Motivating and Defining Equivalence

The idea that one piece of code is “equivalent” to another piece of code is fundamental to programming and computer science. You are informally thinking about equivalence every time you simplify some code or say, “here’s another way to do the same thing.” This kind of reasoning comes up in several common scenarios:

- **Code maintenance**: Can you simplify, clean up, or reorganize code without changing how the rest of the program behaves?
- **Backward compatibility**: Can you add new features without changing how any of the existing features work?
- **Optimization**: Can you replace code with a faster or more space-efficient implementation?
- **Abstraction**: Can an external client tell if I make this change to my code?

Also notice that our use of restrictive signatures in the previous lecture was largely about equivalence: by using a stricter interface, we make more implementations equivalent, because clients cannot tell the difference between them.

We want a precise definition of equivalence so that we can decide whether certain forms of code maintenance or different implementations of signatures are actually okay. We do not want the definition to be so strict that we cannot make changes to improve code, but we do not want the definition to be so lenient that replacing one function with an “equivalent” one can lead to our program producing a different answer. Hopefully, studying the concepts and theory of equivalence will improve the way you look at software written in any language.

There are many possible definitions that resolve this strict/lenient tension slightly differently. We will focus on one that is useful and commonly assumed by people who design and implement programming languages. We will also simplify the discussion by assuming that we have two implementations of a function and we want to know if they are equivalent. The intuition behind our definition is as follows:

- A function \(f\) is equivalent to a function \(g\) (or similarly for other pieces of code) if they produce the same answer and have the same side-effects no matter where they are called in any program with any arguments.
- Equivalence does not require the same running time, the same use of internal data structures, the same helper functions, etc. All these things are considered “unobservable”, i.e., implementation details that do not affect equivalence.
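To make “unobservable” concrete, here is a small sketch (the names `double1` and `double2` are ours, not from the notes): two functions that are equivalent under this definition even though their bodies differ.

```ml
fun double1 x = x + x

fun double2 x =
    let val y = x   (* an extra internal binding: an unobservable detail *)
    in y + y end
```

Both always return the same result for the same argument and have no side effects, so no client could tell them apart.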
As an example, consider two very different ways of sorting a list. Provided they both produce the same final answer for all inputs, they can be equivalent no matter how they work internally or whether one is faster. However, if they behave differently for some lists, perhaps for lists that have repeated elements, then they are not equivalent.

The discussion above was simplified, though, by implicitly assuming the functions always return and have no effect other than producing their answer. To be more precise, we need that the two functions, when given the same argument in the same environment:

1. Produce the same result (if they produce a result)
2. Have the same (non)termination behavior; i.e., if one runs forever the other must run forever
3. Mutate the same (visible-to-clients) memory in the same way
4. Do the same input/output
5. Raise the same exceptions

These requirements are all important for knowing that if we have two equivalent functions, we could replace one with the other and no use anywhere in the program would behave differently.

### Another Benefit of Side-Effect-Free Programming

One easy way to make sure two functions have the same side effects (mutating references, doing input/output, etc.) is to have no side effects at all. This is exactly what functional languages like ML encourage. Yes, in ML you could have a function body mutate some global reference, but it is generally bad style. Other functional languages are *pure* functional languages, meaning there really is no way to do mutation inside (most) functions. *If* you “stay functional” by not doing mutation, printing, etc. in function bodies as a matter of policy, then callers can assume many equivalences they otherwise could not. For example, can we replace `(f x) + (f x)` with `(f x) * 2`? In general, that can be a wrong thing to do, since calling `f` might update some counter or print something.
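A sketch of this pitfall, using a hypothetical `f` that updates a counter:

```ml
val count = ref 0
fun f x = (count := !count + 1; x + 1)

val a = (f 3) + (f 3)   (* a = 8, and count is incremented twice *)
val b = (f 3) * 2       (* b = 8, but count is incremented only once *)
```

The two expressions produce the same answer, yet they are not equivalent, because they mutate `count` differently.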
In ML, that’s also possible, but far less likely as a matter of style, so we tend to have more things be equivalent. In a purely functional language, we are guaranteed the replacement does not change anything.

The general point is that mutation really gets in your way when you try to decide if two pieces of code are equivalent; it is a great reason to avoid mutation.

In addition to being able to remove repeated computations (like `f x` above) when maintaining side-effect-free programs, we can also reorder expressions much more freely. For example, in Java, C, etc.:

```java
int a = f(x);
int b = g(y);
return b - a;
```

might produce a different result from:

```java
return g(y) - f(x);
```

since `f` and `g` get called in a different order. Again, this is possible in ML too, but if we avoid side-effects, it is much less likely to matter. (We might still have to worry about a different exception getting thrown and other details, however.)

### Standard Equivalences

Equivalence is subtle, especially when you are trying to decide if two functions are equivalent without knowing all the places they may be called. Yet this is common, such as when you are writing a library that unknown clients may use. We now consider several situations where equivalence is guaranteed in any situation, so these are good rules of thumb and are good reminders of how functions and closures work.

First, recall the various forms of syntactic sugar we have learned. We can always use or not use syntactic sugar in a function body and get an equivalent function. If we couldn’t, then the construct we are using is not actually syntactic sugar. For example, these definitions of `f` are equivalent regardless of what `g` is bound to:

```ml
fun f x =          fun f x =
    if x               x andalso g x
    then g x
    else false
```

Notice, though, that we could not necessarily replace `x andalso g x` with `if g x then x else false` if `g` could have side effects or not terminate.
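To see that caution concretely, here is a sketch with a hypothetical `g` that prints when called:

```ml
fun g b = (print "called g\n"; b)

fun f1 x = x andalso g x              (* calls g only when x is true  *)
fun f2 x = if g x then x else false   (* calls g for every argument   *)
```

`f1 false` returns `false` without printing, while `f2 false` prints before returning `false`: the two functions agree on results but differ in side effects, so they are not equivalent.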
Second, we can change the name of a local variable (or function parameter) provided we change all uses of it consistently. For example, these two definitions of `f` are equivalent:

```ml
val y = 14               val y = 14

fun f x = x+y+x          fun f z = z+y+z
```

But there is one rule: in choosing a new variable name, you cannot choose a variable that the function body is already using to refer to something else. For example, if we try to replace `x` with `y`, we get `fun f y = y+y+y`, which is not the same as the function we started with. A previously-unused variable is never a problem.

Third, we can use or not use a helper function. For example, these two definitions of `g` are equivalent:

```ml
val y = 14               val y = 14
                         fun f x = x+y+x
fun g z = (z+y+z)+z      fun g z = (f z)+z
```

Again, we must take care not to change the meaning of a variable due to `f` and `g` having potentially different environments. For example, here the definitions of `g` are not equivalent:

```ml
val y = 14               val y = 14
                         fun f x = x+y+x
val y = 7                val y = 7
fun g z = (z+y+z)+z      fun g z = (f z)+z
```

Fourth, as we have explained before with anonymous functions, unnecessary function wrapping is poor style because there is a simpler equivalent way. For example, `fun g y = f y` and `val g = f` are always equivalent. Yet once again, there is a subtle complication. While this works when we have a variable like `f` bound to the function we are calling, in the more general case we might have an expression that evaluates to a function that we then call. Are `fun g y = e y` and `val g = e` always the same for any expression `e`? No. As a silly example, suppose we have `fun h () = (print "hi"; fn x => x + x)` and `e` is `h ()`. Then `fun g y = (h ()) y` is a function that prints every time it is called. But `val g = h ()` is a function that does not print; instead, the program prints `"hi"` once, when creating the binding for `g`.
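The wrapping example can be written out as a pair of bindings whose printed output shows the difference:

```ml
fun h () = (print "hi"; fn x => x + x)

fun g1 y = (h ()) y   (* prints "hi" every time g1 is called          *)
val g2 = h ()         (* prints "hi" once, when this binding is made  *)
```

`g1` and `g2` compute the same doubling function, but their printing behavior differs, so they are not equivalent.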
This should not be mysterious: we know that val-bindings evaluate their right-hand sides “immediately” but function bodies are not evaluated until they are called. A less silly example would be an `h` that might raise an exception rather than returning a function.

Fifth, it is almost the case that `let val p = e1 in e2 end` can be sugar for `(fn p => e2) e1`. After all, for any expressions `e1` and `e2` and pattern `p`, both pieces of code:

- Evaluate `e1` to a value
- Match the value against the pattern `p`
- If it matches, evaluate `e2` to a value in the environment extended by the pattern match
- Return the result of evaluating `e2`

Since the two pieces of code “do” the exact same thing, they must be equivalent. In Racket, this will be the case (with different syntax). In ML, the only difference is the type-checker: the variables in `p` are allowed to have polymorphic types in the let-version, but not in the anonymous-function version. For example, consider `let val x = (fn y => y) in (x 0, x true) end`. This silly code type-checks and returns `(0,true)` because `x` has type `'a -> 'a`. But `(fn x => (x 0, x true)) (fn y => y)` does not type-check, because there is no non-polymorphic type we can give to `x`, and function arguments cannot have polymorphic types. This is just how type-inference works in ML.

### Revisiting our Definition of Equivalence

By design, our definition of equivalence ignores how much time or space a function takes to evaluate. So two functions that always return the same answer could be equivalent even if one took a nanosecond and the other took a million years. In some sense, this is a good thing, since the definition would allow us to replace the million-year version with the nanosecond version. But clearly other definitions matter too. Courses in data structures and algorithms study asymptotic complexity precisely so that they can distinguish some algorithms as “better” (which clearly implies some “difference”) even though the better algorithms are producing the same answers.
Moreover, asymptotic complexity, by design, ignores “constant-factor overheads” that might matter in some programs, so once again this stricter definition of equivalence may be too lenient: we might actually want to know that two implementations take “about the same amount of time.”

None of these definitions is superior to the others; all of them are valuable perspectives that computer scientists use all the time. Observable behavior (our definition), asymptotic complexity, and actual performance are all intellectual tools that are used almost every day by someone working on software.
A Writer’s Collaborative Assistant

The Harvard community has made this article openly available.

<table>
<tbody>
<tr>
<td>Published Version</td>
<td><a href="http://doi.acm.org/10.1145/502716.502722">http://doi.acm.org/10.1145/502716.502722</a></td>
</tr>
<tr>
<td>Citable link</td>
<td><a href="http://nrs.harvard.edu/urn-3:HUL.InstRepos:2252600">http://nrs.harvard.edu/urn-3:HUL.InstRepos:2252600</a></td>
</tr>
<tr>
<td>Terms of Use</td>
<td>This article was downloaded from Harvard University’s DASH repository, and is made available under the terms and conditions applicable to Other Posted Material, as set forth at <a href="http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA">http://nrs.harvard.edu/urn-3:HUL.InstRepos:dash.current.terms-of-use#LAA</a></td>
</tr>
</tbody>
</table>

A Writer’s Collaborative Assistant

Tamara Babaian, CIS Dept., Bentley College, Waltham, MA 02452, tbabaian@bentley.edu
Barbara J. Grosz, DEAS, Harvard University, Cambridge, MA 02138, grosz@deas.harvard.edu
Stuart M. Shieber, DEAS, Harvard University, Cambridge, MA 02138, shieber@deas.harvard.edu

Abstract

In traditional human-computer interfaces, a human master directs a computer system as a servant, telling it not only what to do, but also how to do it. Collaborative interfaces attempt to realign the roles, making the participants collaborators in solving the person’s problem. This paper describes Writer’s Aid, a system that deploys AI planning techniques to enable it to serve as an author’s collaborative assistant. Writer’s Aid differs from previous collaborative interfaces in both the kinds of actions the system partner takes and the underlying technology it uses to do so.
While an author writes a document, Writer’s Aid helps in identifying and inserting citation keys and by autonomously finding and caching potentially relevant papers and their associated bibliographic information from various on-line sources. This autonomy, enabled by the use of a planning system at the core of Writer’s Aid, distinguishes this system from other collaborative interfaces. The collaborative design and its division of labor result in more efficient operation: faster and easier writing on the user’s part and more effective information gathering on the part of the system. Subjects in our laboratory user study found the system effective and the interface intuitive and easy to use. 1. Introduction and Motivation In traditional human-computer interfaces, a person acts as the master directing a computer-system servant. Collaborative interfaces [17] attempt to realign the roles, making the participants collaborators in solving the user’s problem. Formal models of collaboration [5, 8, 7] identify as some of the key features of a collaborative activity commitment to a shared, or joint, goal; an agreed-on division of labor; and communication between the parties to enable the satisfaction of joint goals. Whereas in a traditional interface the human user is the repository of all goals and takes all the initiative in determining ways to satisfy them, in a collaborative interface the participants establish shared goals and both take initiative in satisfying them. For example, the GLIDE system [16] is a network-diagram layout tool in which the user and the computer simultaneously and seamlessly work to satisfy the user’s layout goals. Goal-sharing is achieved by the user’s conveying layout goals through direct manipulation, and the division of labor in achieving the goals is implicit in the design of the system as a whole. Thus, a level of collaboration is achieved without explicit reasoning about goals or the state of the world. 
The Distributed Information Access for Learning (DIAL) system [13] provides for multi-media interactions with a complex information system; DIAL works with users to identify information relevant to their needs. The manner in which DIAL interacts collaboratively derives from the SharedPlans theory of collaboration [7]. DIAL uses explicit representations of recipes for domain actions and reasons about intentional contexts to lessen the amount of information a user needs to provide in querying the system. It demonstrates both the efficacy of deploying a model of collaboration to inform the design of a system and the system limitations that arise from limited reasoning about knowledge and actions. GLIDE and DIAL were designed to directly implement key features of a formal model of collaboration, handling various belief and intentional constructs implicitly. The formal model of collaboration is used as a design guide in the design of the system, but is not reasoned with directly. An alternative design philosophy is found in the Collagen system [14], in which the formal model is directly reasoned with, mechanisms are incorporated to manage databases of beliefs and intentions, and a recipe library of predefined plans is used. In this case, the formal model of collaboration is treated as a specification of the implementation. In this paper, we explore another part of the design space of collaborative interfaces. We describe a writer’s collaborative assistant, implemented in a system called Writer’s Aid, designed to support an author’s writing efforts by performing various bibliographic tasks that typically arise in the process of writing a research manuscript. As in GLIDE and DIAL, Writer’s Aid follows the design-guide approach. Also like earlier systems, the division of labor between the user and Writer’s Aid is predefined and constant. 
A distinguishing feature of Writer’s Aid is its ability to autonomously generate and execute plans to achieve goals provided by the user and adopted by the system. This autonomy, enabled by the use of automated planning, also distinguishes Writer’s Aid from other collaborative interfaces with predefined recipes. It enables Writer’s Aid to act as a robust collaborative partner, undertaking tasks in the service of a joint goal (producing a manuscript with well-formed citations) and pursuing all known avenues to accomplish those tasks.

The use of planning to organize the behavior of a collaborative system is especially important in tasks for which there is more than one possible course of action and where some of the actions may unpredictably fail. Dealing with bibliographic records and papers is one such problem domain. Papers and bibliographic information are often available from multiple electronic sources such as digital libraries, authors’ homepages, and online bibliographies. It is burdensome for a person to search different sources systematically and thoroughly to locate papers, and tedious for people to compose bibliographic records. Because Internet searches are typically incomplete, many authors also must consult hard copies of journals and conference proceedings. The creation of citations is also disruptive to the writing process. Most of such work is more appropriately done by a computer system that can plan for a wide variety of approaches to data gathering and pursue them exhaustively. Similarly, many actions, such as accessing bibliographic databases or web resources, can fail (for instance, due to a server failure). In such a case, a planner can dynamically recover and replan, efficiently reusing already obtained information, until a goal is satisfied or all ways of satisfying it fail.
Planning has proven advantages in the task of information integration from multiple distributed sources; it hides from the user the process of data acquisition and manipulation [1, 10]. We take this idea further and weave such information integration into an ongoing human-computer collaboration on a broader task that is the source of the information need. This setup creates advantages for both parties and thus results in more efficient overall execution of the task. The user’s simultaneous involvement in editing the paper and expertise in the particular academic field provides the computer-assistant with highly selective query terms and thus results in a high likelihood of Writer’s Aid autonomously finding the necessary information. The system’s performance of various search and formatting actions saves the writer time and effort identifying and creating bibliographic records and locating viewable versions of cited papers, enabling more efficient paper writing. Besides being a natural framework for reasoning about goals and actions, planning offers advantages from the design and implementation standpoints. The declarative nature of planning-based interfaces allows extending the system by adding new types of user goals, new information sources, and new information retrieval actions independently of the existing code. As reported by Barish et al. [3] and confirmed by our own experience with Writer’s Aid, once the planning structure is in place, designing, extending and modifying the system in response to users’ requests required relatively little effort. This flexibility ensures that with more and more specialized searchable collections appearing on the Internet, Writer’s Aid’s repertoire of available search methods and sources will be easily augmented. Initial laboratory user studies have shown Writer’s Aid meets its design goals. 
In particular, most subjects (like many authors who are fluent in web technologies) ordinarily perform a sequence of online searches for bibliographic information and papers similar to those done by Writer’s Aid. Even for such users, Writer’s Aid’s freeing them from these tasks and providing relevant information in a timely manner during the writing process was of significant help. An overwhelming majority of users found the system useful (some characterizing it as very useful), reflecting how often it was able to find papers the user intended to cite. Users found the interface intuitive and easy to learn. These results are all the more impressive because little attention was spent on fine-tuning the surface features of Writer’s Aid; for example, the tested version of Writer’s Aid did not use any advanced context-based rank-ordering of the search results. A further example of Writer’s Aid’s usefulness is the preparation of this paper: some of the references cited were identified using Writer’s Aid, and some of the bibliographic records and all inline citations were done by the system.

Writer’s Aid is implemented on top of Carsten Dominik’s RefTeX package for the GNU Emacs editor, and the BibTeX and LaTeX document typesetting systems. The front end is implemented in Emacs Lisp, the planner in Allegro Common Lisp, and web access in WebL [9]. Writer’s Aid is activated when the user opens a TeX document in the Emacs text editor.

After giving an example to illustrate the use and advantages of Writer’s Aid, the paper enumerates characteristics of the bibliographic domain and task that underlie the design choices in Writer’s Aid and then presents details of the system. The system description includes a discussion of the major issues that arise in building collaborative interfaces that utilize planning in domains with incomplete information, especially the implications for the system architecture and for knowledge representation and planning methods.
We briefly outline extensions to classical planning methods to meet the demands of collaborative interfaces in domains with properties like those of Writer’s Aid’s. The paper then presents results of initial user studies, describes related work, and concludes with a discussion of possible future extensions to the system.

### 2. Overview and Example

To illustrate Writer’s Aid’s functions and main features, we will explore its use in the following scenario: An author, Ed, is writing a paper on collaborative interfaces. He decides to refer to Kinny et al.’s article on teamwork, but he does not recall the title of the paper nor where it appeared. He does not want to interrupt his writing to locate the paper, but he does want to scan the paper once it is found to make sure his claims about it are accurate.

**Entering a citation command:** Ed inserts a citation command with a special Emacs command. The system then prompts him to enter search parameters: the keywords of the search and an indication of whether he wants only the bibliographic data on papers or the viewable versions as well. Ed enters Kinny and team as search keywords and selects the option of obtaining bibliographic records and viewable versions of relevant papers.

After a citation command is issued, a label resembling the one shown in Figure 1 appears in the text. The labels include the search keywords and type of search, a word indicating the status (SEARCHING or DONE), and the number of bibliographic records and viewable papers found for the particular citation command; they may be updated to reflect the most recent findings by a simple user request.

While Ed continues writing (and inserting other citation commands) Writer’s Aid plans and executes a search for the material he has requested. To make the search more efficient and better suited to Ed’s needs, Writer’s Aid limits the search for bibliographic information and papers to his preferred bibliographies and paper collections.
Writer’s Aid identifies preferred bibliographies semi-automatically at installation time by searching a user’s home directory for his own bibtex files and inspecting his browser’s bookmarks. At installation time, Writer’s Aid identified as Ed’s preferred bibliographies his own bibtex files and two on-line scientific collections: ResearchIndex and the ACM Digital Library. It constructs a plan to query Ed’s preferred bibliographic collections for the list of bibliographic records of papers that are related to the keywords Kinny and team. Once Writer’s Aid has collected the list of relevant paper titles from Ed’s bibtex file, ResearchIndex, and the ACM Digital Library, it attempts to locate a viewable version of each identified paper. Writer’s Aid’s arsenal includes actions for parsing bibtex files; querying various digital repositories (currently NEC Research Institute’s ResearchIndex and the ACM Digital Library) in search of papers, paper titles, and authors’ homepages; parsing homepages in search of papers with a given title; and downloading papers from a given URL.

### Reviewing the results and selecting a citation item

To view the data that Writer’s Aid has collected in response to the citation command, Ed puts the cursor at the body of the citation command and issues a command to display the search results. The list of paper titles that has been compiled is displayed in a separate window, while the following options are a single keypress away: viewing and editing the bibtex record for an item; viewing the text of the paper, if it is available; and selecting an item for citation. The prompt at the bottom of the selection buffer displays a help line with the commands for each option (see Figure 1). Ed reviews the list, scanning some of the papers by issuing a view command, until he identifies the paper he wants to cite, namely “Planned Team Activity”.
He selects this paper with a single keystroke, and Writer’s Aid ensures the citation is ready for compilation; that is, the appropriate bibliographic record is inserted in the bibliography file and the key for that record is placed in the text of the paper.

### 3. The Citation Application Domain

The Writer’s Aid application has several characteristics that influenced the design of the system architecture and its constituent knowledge representation, reasoning, and planning systems. These requirements arise from two sets of characteristics: characteristics of the **interface**, that is, the capabilities desired in the interaction with a person, and characteristics of the **domain**, that is, the properties of references and citations. These characteristics also appear in many other applications for which collaborative interface systems would be beneficial, and hence their effects on system design are relevant beyond this particular application. We briefly describe these characteristics and their implications for the design and implementation of the collaborative interface system.

#### 3.1 Interface Characteristics

We discuss three interface requirements in this section, along with their implications for the implemented system. These requirements were considered in the initial design of the collaborative interface and later refined given the observations and interviews from our pilot user studies.

**Anytime editing/search/access capability:** A key requirement of the interface is the seamless integration of the search and selection of papers for citation with the process of writing. A user can insert new citation commands and access possibly incomplete results of the search for any of the citation commands at any time while writing or editing a paper.
To guarantee the user fast and effective access to bibliographic information for all citations, information requests arising from citation commands are processed in a round-robin fashion, working on tasks in the order of increasing complexity. For instance, querying a bibliography for relevant bibliographic records is easier and faster than searching for the viewable version of a paper. As a result, Writer’s Aid first attempts to locate the bibliographic records for all citations, and postpones attempting to satisfy goals related to obtaining their viewable versions.¹ **Availability of partial results and search status:** A user can access the results of a search and make a selection at any time, even when the search has not yet completed. When using Writer’s Aid, a person’s primary task, and hence focus, is typically on writing the paper. As a result, users usually do not explicitly monitor the progress of the system. However, Writer’s Aid informs the user of the progress of the search by updating the body of the citation command appearing in the text of the paper (see Figure 1). The display of search-status information is helpful in two ways: It enables early detection of queries that produce no matches (allowing reformulation of the citation command), and it is a way to inform users about completion status of a citation, before they start reviewing and selecting from the list of papers. ### 3.2 Domain Characteristics The domain of Writer’s Aid has two characteristics that directly affect the types of technology used in the underlying system, both relating to the **incompleteness** of the information possessed by the system. A major challenge to systems design is the inherent incompleteness of information about Writer’s Aid’s domain: bibliographic records, papers, their locations, keywords. A complete description of this domain cannot be provided a priori and can never be fully acquired. 
Rather, the system must be able to represent partial information and to reason about acquiring missing information that is necessary to satisfy the planning goals related to a user’s citation needs. Further, Writer’s Aid’s domain knowledge has local incompleteness; it is incomplete even with respect to properties of the objects the system knows about. For instance, it may not know which papers have a particular keyword in their abstracts or where viewable versions of a paper are located. As a result, actions in the bibliographic domain rely heavily on information gathering, which in turn affects the actions to be taken subsequently. For example, the results of a query for relevant papers may determine which viewable versions of papers the system acquires. The system must therefore be able to interleave information acquisition and planning; this is a special case of interleaved planning and plan execution. Classical planning techniques are insufficient to handle these properties of the domain. To address inherent incompleteness, Writer’s Aid uses an expressive yet tractable logic, PSIPLAN [2], which allows efficient representation of incomplete information.

¹However, a user can override this default and can focus Writer’s Aid specifically on getting a particular paper by using a special immediate citation command. The search for materials related to an immediate citation is not abandoned until all possibilities are attempted, that is, until all related planning goals are either satisfied or found unsatisfiable.

Figure 1: A snapshot of Writer’s Aid. In the middle Emacs window, the user has entered a set of citations in the text of a paper. The body of the citation command displays the status of the searches, the first of which is completed. The user is browsing the paper list from one of the incomplete searches in the front window. The rear window is showing the first paper from the list, retrieved by a single keystroke.
To address local incompleteness and allow for information gathering, Writer’s Aid deploys a novel method for combining planning with execution of incomplete plans, which we call planning with hypotheticals. These important technical aspects of our solution are described in a later section. The domain characteristics interact with the interface characteristics. For instance, since Writer’s Aid begins with little knowledge about papers relevant to the user’s request, a substantial amount of information gathering may be required to satisfy a user’s requests. Because most of the information is obtained from remote sources over the Internet, it may take considerable time to identify, locate and download all of this information. On the other hand, it is very likely that the user will be satisfied with only the partial results of the search, much as users of conventional search engines are. To make partial results quickly available to the user (an important interface characteristic), Writer’s Aid’s design includes (i) formulation of the information request into a set of goals, processed in order of decreasing likelihood of relevancy to the user, (ii) initial goal reduction to account for already available information, and (iii) round-robin processing of information requests in order of increasing search complexity. These features are described in more detail in the next sections of the paper. ### 4. Architecture Overview The architecture of Writer’s Aid contains the following three major components in addition to a front-end Emacs interface: - **State of Knowledge (SOK) and Goal (G) databases**: The SOK database contains Writer’s Aid’s knowledge about the user’s preferences and the world of bibliographies, papers and paper sources. The G database records the system’s goals. - **The Reasoning module (R)**: This module handles goal reduction with respect to the SOK database.
- **The Planning Problem Manager (PPM)**: This module constructs and manages planning problems arising from a user’s citation requests. It includes a planning and execution module, PSIPOP-SE (PSIplan-based Partial Order Planner with Sensing and Execution), which constructs and executes individual plans.

In brief, Writer’s Aid uses these components to handle a user’s citation command as follows: The command itself results in a goal being posted to the goal database G and the goal reduction module R being invoked as a separate thread. R consults the SOK database and computes the part of the goal that is already accomplished and the part that still remains to be achieved. It places the latter onto G, passing it to the planning problem manager, PPM. The PPM module creates an instance of a planning problem and hands it to the planner, PSIPOP-SE, which either constructs and executes a plan or reports failure if the planning problem is unsolvable. Upon executing the plan actions, Writer’s Aid updates the SOK database to reflect all changes in knowledge. For example, additional knowledge generated by an information-gathering action is added. Upon completion of its part, PPM removes the goals that were satisfied from the goal agenda, records the failure for the (sub)goals that PPM failed to achieve, and proceeds with the next goal. When a user issues a command to view a list of records and papers corresponding to a citation command, this information is derived from the SOK, formatted, and presented in a separate window for browsing. #### 4.1 SOK and Goal Formulation All of Writer’s Aid’s knowledge about the world is contained in the SOK database. As discussed above, this knowledge is assumed to be correct but incomplete. Since the system cannot have access to a complete description of the world, it must be able to effectively represent, reason, and plan with incomplete knowledge.
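The end-to-end handling of a citation command described above can be sketched in miniature as follows. This is a toy Python sketch, not the system's implementation: the string-based knowledge encoding, the goal and function names, and the always-succeeding "planner" are all illustrative assumptions.

```python
# Toy sketch of the command flow: a citation command posts a goal to G,
# R reduces it against the SOK, and a stand-in planner executes the rest,
# updating the SOK. All names and encodings here are illustrative.
SOK = {"GotBib(smith01)"}      # the record for smith01 is already stored
G = []                         # goal agenda

def post_goal(goal):
    """A citation command results in a goal being posted to G."""
    G.append(goal)

def reduce_goal(goal):
    """R: keep only the subgoals not already satisfied in the SOK."""
    return [sub for sub in goal if sub not in SOK]

def plan_and_execute(subgoals):
    """Stand-in for PPM/PSIPOP-SE: pretend every plan action succeeds."""
    for sub in subgoals:
        SOK.add(sub)           # executing plan actions updates the SOK
    return True

post_goal(["GotBib(smith01)", "Got(smith01)"])
remaining = reduce_goal(G.pop(0))
print(remaining)               # only the paper itself still needs planning
plan_and_execute(remaining)
print("Got(smith01)" in SOK)
```

The point of the sketch is the division of labor: reduction happens before planning, so the planner only ever sees the part of the goal the SOK does not already satisfy.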
Writer’s Aid uses the PSIPLAN language [2], which enables efficient representation of an agent’s incomplete knowledge about the world and of knowledge goals, and has an associated knowledge update procedure that is also efficient. As described in the language specification [2], PSIPLAN entailment is sound, complete, and takes only polynomial time in the size of the agent’s SOK database. Alternative planning representations are either intractable in the general case, or, as with the tractable LCW (locally closed world) representation [6], lack completeness and sometimes discard correct information. Precision in reasoning about the world in the presence of the unknown bears directly on the ability to avoid redundant information gathering; it is thus especially critical for a system that uses costly (time-consuming) information-gathering actions. Incompleteness of reasoning may cause failure to construct all possible plans, which is also problematic for a collaborative agent. PSIPLAN formulas are either ground atoms over function-free terms, universally quantified negated clauses with exceptions, or knowledge propositions. For example, the statement The only bibliographies preferred by Ed are the digital library of the ACM, and maybe the ResearchIndex database. is represented in PSIPLAN by the following two propositions:² 1. ACM’s digital library is a preferred bibliography, which is represented by a ground atom: \[ \text{PrefBib}(ACM) \] 2. Nothing is a preferred bibliography except for ACM and the ResearchIndex, which is expressed as the following quantified negated clause with exceptions: \[ \forall b\; \neg \text{PrefBib}(b) \lor b = ACM \lor b = RI \] To represent that the value of a certain proposition is known, PSIPLAN uses knowledge propositions; \(KW(\text{PrefBib}(ACM))\) denotes that the agent knows the truth value of \(\text{PrefBib}(ACM)\), that is, the agent knows whether ACM is a preferred bibliography.
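The three-valued reading of these two propositions can be sketched in a few lines of Python. This is a toy encoding, not PSIPLAN syntax or its entailment procedure; the data structures and the `truth_value` helper are our own illustration.

```python
# Toy encoding of the two propositions above: the ground atom PrefBib(ACM)
# and the clause "forall b. not PrefBib(b), except b = ACM or b = RI".
ground_atoms = {("PrefBib", "ACM")}
neg_clause = {"pred": "PrefBib", "exceptions": {"ACM", "RI"}}

def truth_value(pred, arg):
    """Return True, False, or None (unknown) under the two propositions."""
    if (pred, arg) in ground_atoms:
        return True
    if pred == neg_clause["pred"] and arg not in neg_clause["exceptions"]:
        return False           # ruled out by the quantified negated clause
    return None                # an exception with no ground atom: unknown

print(truth_value("PrefBib", "ACM"))   # True
print(truth_value("PrefBib", "MIT"))   # False
print(truth_value("PrefBib", "RI"))    # None: "maybe the ResearchIndex"
```

Note how the exception list is what keeps `PrefBib(RI)` genuinely unknown rather than false: the clause closes the world for every bibliography except the listed ones.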
To represent the user’s goals, Writer’s Aid extends PSIPLAN to handle implication goals of the form \(\forall x \exists y\, P(x, y) \implies Q(x, y)\), where \(x\) and \(y\) are sets of variables, and both \(P\) and \(Q\) are conjunctions of atoms. A user’s request to obtain papers relevant to subject \(Y\) is formulated as the following goal: For each paper that is relevant to subject \(Y\) according to some bibliography preferred by \(Ed\), get that paper and get the bibliographic record for it. This goal is instantiated as three separate PSIPLAN goal formulas. The first goal is to obtain all papers and bibliographic records of papers containing keywords \(Y\) in the title and referenced in the user’s own local bibliographic collections: \[ \forall p \exists b\, \text{PrefBib}(b) \land \text{LocalBib}(b) \land \text{InCollection}(p, b) \land \text{TitleUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p) \] (1) The second goal extends the first to all of the user’s preferred bibliographic collections. \[ \forall p \exists b\, \text{PrefBib}(b) \land \text{InCollection}(p, b) \land \text{TitleUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p) \] (2) The last goal is to obtain all papers containing keywords \(Y\) in the text, rather than in the title. \[ \forall p \exists b\, \text{PrefBib}(b) \land \text{InCollection}(p, b) \land \text{TextUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p) \] (3) The first goal is entailed by the second, which is entailed by the third; thus, the set of papers required by the first goal is subsumed by the set of the second goal’s papers, which, in turn, is subsumed by that of the third goal (since a title is a part of the text).
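Grounding an implication goal such as (2) over a concrete database can be sketched as follows. The papers, bibliographies, and keyword below are made up for illustration; the sketch only shows how the left hand side selects the papers for which the right hand side atoms are required.

```python
# Toy grounding of goal (2): for every paper p in some preferred bibliography b
# whose title uses keyword Y, require Got(p) and GotBib(p).
pref_bibs = {"ACM"}
in_collection = {("p1", "ACM"), ("p2", "ACM"), ("p3", "RI")}
title_uses = {("p1", "planning"), ("p3", "planning")}

def required_atoms(keyword):
    """Instantiate the right hand side for every paper matching the left hand side."""
    atoms = set()
    for (p, b) in in_collection:
        if b in pref_bibs and (p, keyword) in title_uses:
            atoms |= {f"Got({p})", f"GotBib({p})"}
    return atoms

# p3 matches the keyword but bibliography RI is not preferred in this toy state;
# p2 is in a preferred bibliography but its title lacks the keyword.
print(sorted(required_atoms("planning")))
```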
However, these three goals are posted and processed in the order presented above to explicitly prioritize the search for papers that are more likely to be in the desired set. Writer’s Aid is able to accomplish this incremental processing without doing redundant searches for the same information by saving in the SOK the information acquired during its attempts to satisfy the first and second goals.

²In this section, we use the following predicates: \(\text{PrefBib}(b)\) denotes that \(b\) is a preferred bibliography; \(\text{LocalBib}(b)\) denotes that \(b\) is a locally stored bibtex bibliography; \(\text{InCollection}(p, b)\) denotes paper \(p\) being in the collection of bibliography \(b\); \(\text{TitleUses}(p, Y)\) denotes that keywords \(Y\) occur in \(p\)’s title (where by title we mean a combination of the title and author names); \(\text{TextUses}(p, Y)\) denotes that keywords \(Y\) occur in \(p\)’s full text including the title and author fields; \(\text{Got}(p)\) and \(\text{GotBib}(p)\) denote, respectively, that paper \(p\) and its bibliographic record are stored locally.

#### 4.2 Goal Reduction Once a goal is posted to the goal database $G$, the goal reduction module $R$ handles the processing of the goal. $R$ chooses a goal from $G$, reduces it with respect to the SOK, and passes it to PPM. When the planner returns, $R$ records success or failure in achieving the goal, and proceeds to the next one. For simplicity of presentation, we abbreviate a conjunction of predicates occurring in the left hand side of goals (1–3) above by a metapredicate $Rel(p, b, Y)$, indicating that a paper $p$ is relevant to keywords $Y$ according to bibliography $b$, and drop $GotBib(p)$ from the right hand side.
Thus, the goal with which we are concerned is $$g = \forall p \exists b\, PrefBib(b) \land Rel(p, b, Y) \implies Got(p) \quad (4)$$ To satisfy this goal, it is first necessary to find all papers that are relevant to $Y$ according to some preferred bibliography and then, for those papers only, construct a plan for obtaining them. Thus, $R$ transforms $g$ into two goals in PSIPLAN’s base language: 1. finding out the truth value of the conjunction $PrefBib(b) \land Rel(p, b, Y)$ for all possible values of $b$ and $p$, i.e. $$g_1 = \forall p \forall b\, KW(PrefBib(b) \land Rel(p, b, Y)),$$ and, after $g_1$ is achieved, 2. instances of $Got(p)$ corresponding to all values of $p$ for which $PrefBib(b) \land Rel(p, b, Y)$ is true. $R$ places $g_1$ as the next goal on $G$ and further reduces it with respect to the SOK to identify the part that is not already known (e.g., as a result of previously executed information-gathering actions). This computation corresponds to a special PSIPLAN operation, called extended difference, denoted $\sim$. Given PSIPLAN propositions $A$ and $B$, $A \sim B$ is the set of propositions of $A$ that are not entailed by $B$. $R$ reduces any goal $g$ by computing the extended difference $g \sim SOK$. For example, given an information goal $g_1$ and an SOK containing the information that nothing is a preferred bibliography except for possibly the ACM digital library and the ResearchIndex, $R$ deduces that the only remaining information goals are $$g_2 = \forall p\, KW(PrefBib(ACM) \land Rel(p, ACM, Y)),$$ $$g_3 = \forall p\, KW(PrefBib(RI) \land Rel(p, RI, Y)),$$ and passes $g_2$ and $g_3$ to the PPM. Such reduction of $g$, if not done prior to planning, would need to be carried out while planning to achieve this goal inside the planner itself.
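A set-based sketch of the extended-difference reduction $g \sim SOK$ follows. The real operation works over PSIPLAN entailment; here entailment is approximated by simple set membership, and the goal strings are illustrative stand-ins for the knowledge propositions above.

```python
def extended_difference(goal_props, sok_props):
    """Keep the goal propositions not already entailed (here: contained) by the SOK."""
    return {g for g in goal_props if g not in sok_props}

# g1 asks for the truth value of PrefBib(b) for every bibliography; in this toy
# state the SOK already settles every b other than ACM and RI.
g1 = {"KW(PrefBib(ACM))", "KW(PrefBib(RI))", "KW(PrefBib(MIT))"}
sok = {"KW(PrefBib(MIT))"}

print(sorted(extended_difference(g1, sok)))  # only the ACM and RI goals remain
```

As in the text, the payoff is that the planner receives exactly the still-unknown part of the goal, never re-gathering information the SOK already holds.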
However, in our formalism no information ever gets lost, so such early separation of yet unknown facts from those already known is an advantage: it identifies exactly what goal the planner is working to achieve, and the user can access that information while the planner is working on the goal. The advantage becomes even more apparent if we consider having multiple agents working to achieve the goal. In such cases, reducing the goal initially prevents redundant computation. #### 4.3 Managing Planning Problems Once the reduced goal is computed, it is passed to PPM, the Planning Problem Manager, which takes care of creating, prioritizing, solving, and keeping track of the status of multiple planning tasks arising from goals adopted by Writer’s Aid. PPM consists of two major components: a list of planning problems, and a planning algorithm, PSIPOP-SE, which constructs solution plans for individual planning problems. When a goal is passed to PPM, a new planning problem is created and passed to PSIPOP-SE, which searches for a solution plan, and returns the result. Each planning problem is a structure that records a planning goal, its solution, and the overall status of the planning problem, which is one of *open*, *done*, or *unsatisfiable*. Open problems are those for which a solution plan has not yet been found, but the goal has not yet been shown to be unsatisfiable. If a solution plan is found and successfully executed, PPM removes the planning problem from the list of open problems and places it on the done list. If a solution is found but an action execution failure occurs, the failed action instance is recorded and never used again by the planner; the planning problem remains on the open list until the planner establishes that no alternative course of action exists. Unsatisfiable problems are those that have unachievable goals.
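The three problem statuses and the treatment of failed action instances can be sketched with the following bookkeeping. The class, method names, and the string encodings of problems and actions are our own illustration, not the paper's implementation.

```python
class PPM:
    """Minimal sketch of the Planning Problem Manager's status bookkeeping."""
    def __init__(self):
        self.open, self.done, self.unsat = [], [], []
        self.failed_actions = set()   # failed instances are never retried

    def record(self, problem, outcome, failed_action=None):
        if outcome == "solved":               # plan found and executed
            self.open.remove(problem)
            self.done.append(problem)
        elif outcome == "action_failed":      # stays open; action blacklisted
            self.failed_actions.add(failed_action)
        elif outcome == "unsatisfiable":      # goal shown unachievable
            self.open.remove(problem)
            self.unsat.append(problem)

ppm = PPM()
ppm.open.append("get(p1)")
ppm.record("get(p1)", "action_failed", failed_action="download(p1, mirrorA)")
print(ppm.open)                # still open: an alternative plan may exist
ppm.record("get(p1)", "solved")
print(ppm.done)
```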
**Iterative Deepening in Hypotheticals:** To guarantee step-by-step processing, and availability of partial results of the search for all of the user’s requests as motivated earlier, PPM processes open problems in a round-robin fashion, gradually increasing the maximum complexity level of finding and executing the solution plan. To implement the gradual increase of solution complexity, PPM performs iterative deepening in hypotheticals. A hypothetical is a partial plan that hypothesizes about the value of an unknown proposition or subgoal. For example, having no information on the location of a paper, the planner may adopt a hypothesis that the paper is available from a certain collection, and verify the information by querying the collection. An example of a plan with two hypotheses is a plan that hypothesizes that a paper is available from the author’s homepage, and then, having no information about the author’s homepage, hypothesizes that the URL for the homepage can be found from a known index. By verifying a hypothesis via execution of a sensing action, the planner eventually collects enough information, and thus reduces the incompleteness of the knowledge enough to find a solution plan or find the goal unsatisfiable. PPM maintains a list of all open problems, processed in a loop. At each cycle of the loop PPM attempts to find a solution for each open problem in turn, increasing the maximum allowed number of hypotheses in a solution plan when necessary, and executes the plan until the processing is completed and the problem is removed from the open list. This combination of iterative deepening in hypotheticals with round-robin processing of planning problems enables effective time sharing between the user’s goals, which is necessary for providing partial results on many user requests simultaneously, while avoiding the bottleneck of searching for a hard-to-find paper, which may not even be the one desired by the user.
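The loop just described can be sketched as follows. Here `try_solve` is a stand-in for PSIPOP-SE that simply succeeds once the hypothesis bound is high enough, and the example problems are illustrative; the sketch shows only the scheduling behavior, not real plan construction.

```python
def try_solve(problem, bound):
    """Stand-in for PSIPOP-SE: solvable once enough hypotheses are allowed."""
    return bound >= problem["hypotheses_needed"]

def process(problems, max_bound=3):
    """Round-robin over open problems, iteratively deepening in hypotheticals."""
    solved, open_list = [], list(problems)
    for bound in range(max_bound + 1):     # gradually raise plan complexity
        for prob in list(open_list):       # one attempt per problem per cycle
            if try_solve(prob, bound):
                solved.append(prob["name"])
                open_list.remove(prob)
    return solved

problems = [
    {"name": "get(p1)", "hypotheses_needed": 0},  # location already known
    {"name": "get(p2)", "hypotheses_needed": 2},  # e.g. homepage + URL index
]
print(process(problems))   # the easy problem is not blocked by the hard one
```

The easy problem completes on the first pass even though a harder problem is on the agenda, which is exactly the time-sharing property the text motivates.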
### 5. Evaluation

We performed a pilot study with two users, followed by a user study involving eleven subjects. Most of the subjects were Harvard University students and postdocs; eleven are computer scientists, one a physicist. Most, though not all, of the subjects were familiar with Emacs and had previously written papers using LaTeX and BibTeX. The subjects were shown a brief, two-minute demonstration of the system; they were then given a printed tutorial (available at http://www.eecs.harvard.edu/~tbabaian/waid/tutor.ps) and asked to follow the steps of the tutorial. The subjects were next asked to write a paragraph or two of text in the area of their expertise involving citations, using Writer’s Aid. All the subjects used the same local bibliography collection, which overlapped with some of the citations some subjects desired to make, but most of the bibliographic records required by the authors were dynamically collected from ResearchIndex. To our surprise, even without access to the writer’s personal BibTeX database, but using only ResearchIndex as another preferred bibliography and the (dynamically located) authors’ homepages in the search for papers, Writer’s Aid was able in most cases to successfully locate at least the bibliographic records for the papers. The success rate for finding viewable versions was more modest, but users still found the system very helpful. We expect a higher number of papers could be found by expanding the set of sources to include more online collections. After the test, subjects completed a questionnaire allowing freeform answers to the following questions: 1. How hard was it to learn to use Writer’s Aid? 2. Was it useful? Would you use it for writing papers? 3. Which modifications to the functionality/interface of Writer’s Aid would you recommend? Some users were later interviewed to clarify their responses to Question 3.
The success of Writer’s Aid is indicated by the answers to Question 2. To the first part, “Was Writer’s Aid useful?”, the replies were: very useful (3), useful (7), moderately useful (1). To the question “Would you use it for writing papers?” ten users answered yes. (The single dissenting user explained that he would not trust any online source with his work.) To the question “How hard was it to learn to use Writer’s Aid?” four users answered very easy, two easy, and five reasonably easy or not hard. In response to Question 3, users suggested adding morphology-aware search, automatic spell checking of keywords, an ability to add a record to the personal bibliographic collection without citing it, and minor alterations to the window interface. We are planning to implement some of these features in the next version of Writer’s Aid. ### 6. Related Work and Future Directions Research presented in this paper has connections to work in several areas, most notably AI-based collaborative interfaces, information integration systems and Internet search. Like many other information integration systems, Writer’s Aid takes advantage of the breadth of bibliographic information available on the web. BIG [10] integrates several AI technologies, including resource-bounded planning and scheduling, to conduct an offline search for information on software packages based on a client’s specification. Barish et al. [3] report on a query-planning-based system, called TheaterLoc, that searches online movie-related databases in real time in response to users’ queries. Writer’s Aid differs from these and other planning-based information-retrieval systems [11] in carrying out its activities in the context of collaboration with a user in the ongoing writing process, so that this writing process provides context for interpreting the information request.
Writer’s Aid is also distinguished from other planning-based information retrieval systems by the capabilities it incorporates for interleaved planning and execution, crucial for integrating information-gathering into the planning process. Collagen [15] is a middleware package based on a theory of collaboration in dialogue [12]; it provides a means for creating interfaces that participate in dialogues with users about their goals and beliefs, suggesting possible courses of action based on the available library of act recipes. Collagen does not include capabilities for automated reasoning about goal achievement beyond the use of a fixed set of recipes. Thus, it lacks Writer’s Aid’s ability to satisfy user goals from almost any initial state using a variety of dynamically created courses of action. Collagen’s collaborative strength is its ability to work with the user through a process, known (via a recipe library) to the system, leading to achievement of the user’s goal. The focus in Writer’s Aid is on another system capability important for collaboration, namely, the ability to plan for and autonomously carry out a complex task that otherwise would have to be done by the human, and to integrate the activities of the system-partner with those of the user in a non-intrusive and efficient manner. Other work has explored the use of context in information retrieval. Watson [4] is intended to work with its user proactively, downloading and suggesting information it regards as relevant to a document that the user is currently editing or viewing. Watson creates a search query based on the text and the structure of the document, but not related to any specific user request. However, the user study of Watson [4] evaluated the utility of information provided by Watson statically; it did not involve the system working “alongside” a user. As a result, the appropriateness of Watson’s search results in interactive use was not evaluated in that study.
In contrast, Writer’s Aid takes seriously the fact that when users delegate to a system the task of finding information needed to complete a task (or satisfy a user’s goal), the usefulness of the system depends critically on the relevancy of the information retrieved by the system and on the results being available in a timely manner. Otherwise, the time it takes the user to sift through irrelevant information, or the time spent waiting for the results, may outweigh the time the user saves by not performing the search himself. These performance characteristics in Writer’s Aid are ensured by the system adopting the precisely specified user’s search goal and using information sources that are directly related to a well defined set of data items such as papers and bibliographic records. In the future, we plan to extend Writer's Aid to incorporate the context of a citation request for more efficient search and ranking of the results. Another direction we have started to explore is adding the user as a source of information about his or her own preferences and knowledge of the relevance of various online collections to the subject of a paper. Such personalization tasks can be stated declaratively via a set of knowledge goals and satisfied by an action of querying the writer when this information becomes necessary. This representation separates personalization of the interface from its overall architecture, making it more easily adjustable. It also leads to preference elicitation that occurs within the context of a particular task.
### 7. Conclusion

We have presented a writer's assistant system that works collaboratively with a user, achieving the necessary flexibility of behavior through explicit representation, reasoning, and planning with respect to goals and domain knowledge. Collaborativeness is embodied in the system's commitment to the shared goals of producing accurate, well-formed citations; in a division of labor in which each participant contributes according to its natural capabilities, pursuing all known avenues to accomplish those goals; and in communication between the parties in both directions, the user providing query information and bibliographic choices to the system, and the system providing query status and gathered information to the user. The use of planning technology to implement collaborative interfaces places new requirements on the knowledge representation and planning methods. We presented a set of extensions to classical planning representations and techniques to satisfy these requirements. In particular, the use of an expressive, yet precise and tractable formalism for knowledge representation, PSIPLAN, and the addition of hypothetical planning to integrate domain actions with sensing actions and interleaved execution, were crucial to the implementation of the collaboration. We conducted a laboratory user study to examine the effectiveness of the system. The results indicate the success of this particular interface and its implementation. Users characterized it as a useful and easy-to-learn tool that they would like to have for academic writing. ### 8. Acknowledgements The research reported in this paper was supported by National Science Foundation grants IRI-9618848 and IIS-9978343 to Harvard University. The authors thank Luke Hunsberger, Wheeler Ruml and Christian Lindig for their assistance in developing the system and for helpful comments on the paper, and all participants of the user study. ### 9. References
{"Source-Url": "https://dash.harvard.edu/bitstream/handle/1/2252600/Shieber_WritersCollaborative.pdf?isAllowed=y&sequence=2", "len_cl100k_base": 9161, "olmocr-version": "0.1.53", "pdf-total-pages": 9, "total-fallback-pages": 0, "total-input-tokens": 28929, "total-output-tokens": 10502, "length": "2e13", "weborganizer": {"__label__adult": 0.0005617141723632812, "__label__art_design": 0.003664016723632813, "__label__crime_law": 0.0005435943603515625, "__label__education_jobs": 0.1007080078125, "__label__entertainment": 0.0005879402160644531, "__label__fashion_beauty": 0.00044155120849609375, "__label__finance_business": 0.0010576248168945312, "__label__food_dining": 0.0005812644958496094, "__label__games": 0.00162506103515625, "__label__hardware": 0.0013675689697265625, "__label__health": 0.0009279251098632812, "__label__history": 0.001361846923828125, "__label__home_hobbies": 0.0003829002380371094, "__label__industrial": 0.0005950927734375, "__label__literature": 0.005115509033203125, "__label__politics": 0.0005307197570800781, "__label__religion": 0.0009889602661132812, "__label__science_tech": 0.2310791015625, "__label__social_life": 0.001003265380859375, "__label__software": 0.1485595703125, "__label__software_dev": 0.496826171875, "__label__sports_fitness": 0.00037550926208496094, "__label__transportation": 0.0007252693176269531, "__label__travel": 0.0004355907440185547}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 48533, 0.02165]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 48533, 0.4801]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 48533, 0.91882]], "google_gemma-3-12b-it_contains_pii": [[0, 1507, false], [1507, 6555, null], [6555, 13520, null], [13520, 20429, null], [20429, 23314, null], [23314, 29654, null], [29654, 36043, null], [36043, 43086, null], [43086, 48533, null]], 
"google_gemma-3-12b-it_is_public_document": [[0, 1507, true], [1507, 6555, null], [6555, 13520, null], [13520, 20429, null], [20429, 23314, null], [23314, 29654, null], [29654, 36043, null], [36043, 43086, null], [43086, 48533, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 48533, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 48533, null]], "pdf_page_numbers": [[0, 1507, 1], [1507, 6555, 2], [6555, 13520, 3], [13520, 20429, 4], [20429, 23314, 5], [23314, 29654, 6], [29654, 36043, 7], [36043, 43086, 8], [43086, 48533, 9]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 48533, 0.03226]]}
olmocr_science_pdfs
2024-12-09
2024-12-09
90175ed68eeb7e211647b13e86b5c4027b37dc83
[REMOVED]
## Contents

- 1 Introduction
  - 1.1 Supported Operating Systems
  - 1.2 System Requirements
- 2 New and Noteworthy
  - 2.1 ADSP-SC58x/ADSP-2158x Processor Support Added
  - 2.2 ADSP-BF70x Silicon Revision 1.0 Revised
  - 2.3 Upgraded Development Environment
- 3 ADSP-SC58x and ADSP-2158x Processor Support
  - 3.1 Developing for ADSP-SC58x/ADSP-2158x Processors
  - 3.2 Drivers and Services for the ADSP-SC58x Processors
  - 3.3 Creating Projects for ADSP-SC58x
  - 3.4 Using Pre-load Files
  - 3.5 Supported Emulators
  - 3.6 Simulation of ADSP-21584 and ADSP-SC589 Processor cores
  - 3.7 Debugging Multiple Cores
    - 3.7.1 Core Option Settings in Debug Configurations
  - 3.8 Debugging Only a SHARC+ Core
  - 3.9 Creating Bootable LDR Files for ADSP-SC58x / ADSP-2158x
- 4 ADSP-BF70x Processor Support
  - 4.1 ADSP-BF70x silicon revision 1.0
    - 4.1.1 Utility ROM Improvements for ADSP-BF70x revision 1.0
  - 4.2 Updated Silicon Anomaly Support
    - 4.2.1 Assembler detection for STI-CLI silicon anomaly 19000010
    - 4.2.2 Watchdog service workaround for silicon anomaly 19000011
    - 4.2.3 Branch Predictor cleared during application startup for silicon anomaly 19000047
    - 4.2.4 Branch Predictor disabled during ISRs for silicon anomaly 19000054
- 5 IDE Changes
  - 5.1 Platform Changes Since 3.7
  - 5.2 C/C++ Development Tools (CDT) Changes Since 8.0
  - 5.3 New IDE Features Added Since CCES 1.2.0
  - 5.4 Eclipse Features Removed Since CCES 1.2.0
  - 5.5 Workspace Compatibility with Previous Versions of CCES
  - 5.6 IDE Workspace Default Changed for CCES 2.0.0
- 6 Toolchain Updates
  - 6.1 SHARC Compiler
    - 6.1.1 Updated Language Standards Support
    - 6.1.2 Universal-character-names in narrow string literals
    - 6.1.3 Compiler error cc0137
    - 6.1.4 Inlining support in C99
    - 6.1.5 cc21k SHARC compiler driver switch changes
  - 6.2 New Compiler Warnings
    - 6.2.1 cc1486: invalid section qualifier
  - 6.3 Run-Time Library Changes
    - 6.3.1 Library performance optimizations
    - 6.3.2 exit
    - 6.3.3 rand (SHARC only)
    - 6.3.4 Data cache invalidation (Blackfin only)
    - 6.3.5 The %a conversion specifier
    - 6.3.6 INTR_TAPC0_KEYFAIL renamed
    - 6.3.7 Core-management functions for multi-core processors
  - 6.4 LDF and Linking Related Changes
    - 6.4.1 Implicit support for External memory sections in SHARC+ cores
    - 6.4.2 32-bit SHARC+ PM data changes
    - 6.4.3 SHARC+ Cache support (changes required for custom LDFs)
    - 6.4.4 USE_L1_ONLY macro no longer has an effect
    - 6.4.5 MEM_ASYNC sections renamed for ADSP-BF60x generated LDFs
  - 6.5 Blackfin Assembler and Branch Instruction Encoding
- 7 Known Problems and Limitations
  - 7.1 No System Reset
  - 7.2 ICE-2000 JTAG Frequencies limited on ADSP-BF70x Rev 1.0 silicon
  - 7.3 No SWD debug support for ADSP-SC58x/ADSP-2158x
  - 7.4 GDB with OpenOCD with ICE-1000 or ADSP-SC584
  - 7.5 Access to Uninitialized External Memory on ADSP-SC58x/ADSP-2158x Processors
  - 7.6 Changing silicon revision on a ADSP-BF707 that contains SSL drivers added as source from the UI may not work if the driver is in ROM
  - 7.7 OTP Programmer not visible through Examples Browser
  - 7.8 Ensure the volatile type qualifier is used where necessary
  - 7.9 Passing data between SHARC+ and ARM cores
  - 7.10 No concurrent USB Host and Device stack capability for ADSP-SC58x Processors
  - 7.11 "Target not available"
  - 7.12 Relaunching Debug Sessions with GDB with OpenOCD or QEMU
  - 7.13 Stdio and Multithreaded Libraries in the GNU ARM Toolchain
  - 7.14 QEMU Memory Limitations
  - 7.15 Other Known Issues

## 1 Introduction

This document describes the changes for CrossCore Embedded Studio (CCES) 2.0.0. This release adds support for the ADSP-SC58x and ADSP-2158x processor families.
### 1.1 Supported Operating Systems

This release of CCES is supported on the following operating systems:

- Windows Vista Business, Enterprise, or Ultimate SP2 (32-bit only)
- Windows 7 Professional, Enterprise, or Ultimate (32- and 64-bit)
- Windows 8.1 Pro or Enterprise (32- and 64-bit)

**Note** Windows Vista, Windows 7, and Windows 8.1 users may experience User Access Control (UAC) related errors if the software is installed into a protected location, such as Program Files or Program Files (x86). We recommend installing the software in a non-UAC-protected location.

### 1.2 System Requirements

Verify that your PC meets these minimum requirements for the CCES installation:

- 2 GHz single-core processor; 3.3 GHz dual-core or better recommended
- 1 GB RAM; 4 GB or more recommended
- 2 GB available disk space
- One open USB port

**Notes**

- A faster disk drive decreases the build time, especially for a large number of source files. 4 GB of RAM or more will substantially increase the performance of the IDE.
- For proper viewing of documentation under Windows, Internet Explorer 9 or greater is recommended.

## 2 New and Noteworthy

### 2.1 ADSP-SC58x/ADSP-2158x Processor Support Added

The ADSP-SC58x/ADSP-2158x processors are supported in CCES 2.0.0. These processors have the following key features:

- Two SHARC+ cores (except the ADSP-SC582, which has only one SHARC+ core); a proprietary toolchain is included to support these cores.
- An ARM Cortex-A5 core (ADSP-SC58x processors); a GNU toolchain is included to support this core, including GDB with QEMU (simulator) and OpenOCD (emulator).
- Functionality specific to Analog Devices processors is detailed in the Analog Devices ARM Toolchain Manual, which can be found from the Help > Help Contents menu of the IDE.
- Open source documentation relevant to the tools is also provided with the online help.
- The source files for the GNU toolchain and related open source software are available from http://analog.com/opensource.
- High-Performance Floating-Point FFT Accelerator (FFTA), accessible from the ARM and SHARC+ cores. Details on how to use the accelerator in your application can be found in Using the SHARC+ FFTA Accelerator within the online help.

For more details, see ADSP-SC58x and ADSP-2158x Processor Support, later in this document.

### 2.2 ADSP-BF70x Silicon Revision 1.0 Revised

ADSP-BF70x silicon revision 1.0 has been revised in CCES 2.0.0 to include workarounds for silicon anomalies that have been characterized since CCES 1.2.0. For details, refer to ADSP-BF70x Processor Support, later in this document.

### 2.3 Upgraded Development Environment

The IDE has been upgraded from Eclipse 3.7.2, found in CCES 1.2.0, to Eclipse 4.4.0 in CCES 2.0.0. As the culmination of more than four years of development, this update brings a great deal of new functionality and features. For more details, see IDE Changes, later in this document.

## 3 ADSP-SC58x and ADSP-2158x Processor Support

### 3.1 Developing for ADSP-SC58x/ADSP-2158x Processors

The SHARC+ cores in these processors represent a significant advancement over the SHARC cores found in earlier SHARC processors. There are a number of significant changes of which you should be aware:

- The ARM core is the primary core in ADSP-SC58x processors, and is the core that boots. The boot process follows the advanced, flexible boot architecture found on Blackfin processors.
- There is a new memory map, supporting the ARM core and the SHARC+ cores. Custom LDFs will need to be replaced.
- The SHARC+ cores introduce a number of performance improvements to the architecture:
  - Hardware support for double-precision floating-point operations.
  - A new cache architecture, with instruction and data caches.
  - A Branch Target Buffer.
- The SHARC+ cores have a longer pipeline:
  - A given sequence of instructions may have different stalls on SHARC+, compared to SHARC.
  - The compiler will schedule instructions to avoid stalls where possible; this can lead to minor variances in arithmetic results relative to SHARC when commutative operations are re-ordered.
  - Some DSP library functions included with CrossCore Embedded Studio may require more cycles on SHARC+, compared to SHARC. Please contact Analog Devices for assistance if you encounter difficulties.
- The SHARC+ architecture accepts SHARC assembly language, but is not binary-compatible; SHARC assembly code, C, and C++ must be rebuilt for correct execution on SHARC+.
- The SHARC+ core supports word-addressed memory spaces, but is primarily a byte-addressed architecture:
  - Peripherals deal with byte-addressed spaces. SSL/DD device drivers are available to assist in development.
  - RTOS and other middleware products operate in byte-addressed space.
  - The compiler supports byte- and word-addressed spaces, and has support for interoperability between them. Refer to Using Byte Addressing in the Compiler Manual for SHARC Processors.
  - SHARC assembly code may need changes to operate correctly when dealing with byte addresses.

### Supported silicon revisions

CCES 2.0.0 supports silicon revisions 0.0, 0.1, and 1.0 of ADSP-SC58x and ADSP-2158x processors. Please note:

- Support for silicon revision 0.0 is deprecated; it will not be supported in CCES 2.1.0 and later releases.
- Silicon revision 1.0 is currently treated as equivalent to silicon revision 0.1. CCES 2.1.0 and later releases will refine the support for silicon revision 1.0, to disable workarounds for silicon anomalies that no longer apply.

### 3.2 Drivers and Services for the ADSP-SC58x Processors

The SSL 2.0 driver model provides support for almost all of the ADSP-SC58x peripherals. For a complete list of supported peripherals, see the ADSP-SC58x API Reference for both the SHARC+ and Cortex-A cores in the online help.
The main features of this driver model are:

- Small footprint and minimal cycle counts
- Easy to use and modify
- Interrupts and DMA have been abstracted
- Switching between interrupt and DMA mode supported via the API
- Works with or without an RTOS (uC/OS-II, uC/OS-III, and non-RTOS are supported)
- Three programming models are supported:
  - Non-Blocking Mode
  - Blocking Mode
  - Callback Mode

For more information on these programming models, see the Low-Level Driver API Reference within the Device Drivers User Guide in the online help.

The CrossCore Embedded Studio environment supports and simplifies the use of the SSL 2.0 model in the following ways:

- Adding Device Drivers and Services sources through the System Configuration Manager (system.svc) Add-In manager
- Pin multiplexing code generation via a CCES GUI
- SRU (system routing unit) code generation via a CCES GUI
- Example Manager
- Code sketches, also available via the Example Manager

Examples for supported peripherals are found in the ADSP-SC58x Board Support Package (BSP), available separately from www.analog.com:

- Power On Self Test (POST)
- Working examples for on- and off-chip peripherals

**USB Driver PMU usage**

The USB enumeration process, for both host and device mode, may require a delay of precisely 1 msec. On the Cortex-A core, the USB driver uses the PMU (Performance Monitoring Unit) to precisely calculate the delay. The USB driver will reset the PMU and then use its cycle-counting registers. The PMU is a shared resource; any other part of the application that requires the PMU must be aware of the USB enumeration process and its dependency on the PMU.

### 3.3 Creating Projects for ADSP-SC58x

To create a project within the CCES IDE for ADSP-SC58x, choose the SHARC family and then select your target processor, e.g. ADSP-SC589. The IDE will then let you choose to create projects for each (or all) of the cores.
Whether you are targeting just the ARM core, one of the SHARC+ cores, or all three cores, you can choose which projects you would like to create.

### 3.4 Using Pre-load Files

External memory needs to be configured appropriately before you can load your application into it. When your application boots, this is done through initcodes; when you load your application into your target using the debugger, this can be done by the IDE automatically for simple processors. For heterogeneous processors such as the ADSP-SC58x processors, more flexibility is required.

This release of CCES 2.0.0 introduces the concept of pre-load files, which are equivalent to initcodes but are used during the debugging phase of development. These pre-load files are currently used only for the ADSP-SC58x and ADSP-2158x processors. You can find the pre-built binary files in SHARC\ldr. The projects used to create the pre-load binary files are located in SHARC\ldr\init_code\SC589_Init.

The main purpose of these pre-load files is to set up clocks and DMC settings so that the debugger is able to load your application to external memory. If you wish to change this setup, you can update the source files, rebuild the pre-load executable, and place the binary in the SHARC\ldr folder in place of the existing file.

The master core is generally the only core that will need a pre-load, so debug configurations will automatically fill the pre-load file in for you as one of the applications that will be loaded. For an ADSP-SC58x processor, the pre-load will be part of the ARM Core 0 project. For an ADSP-2158x processor, the pre-load will be part of the first SHARC+ core project.

In most cases, you will want your project executables to have the options set as described in Core Option Settings in Debug Configurations, under Debugging Multiple Cores, below.
See also Access to Uninitialized External Memory on ADSP-SC58x/ADSP-2158x Processors in the Known Problems and Limitations section for additional information on pre-load files.

**Note** Do not use pre-load files when building bootable LDR files.

See also the Init Code section in the Loader and Utilities Manual for additional information.

### 3.5 Supported Emulators

Only the ICE-1000 and ICE-2000 emulators can be used to debug ADSP-SC58x and ADSP-2158x processors.

### 3.6 Simulation of ADSP-21584 and ADSP-SC589 Processor cores

CCES 2.0.0 has the following options for simulating ADSP-2158x/ADSP-SC58x processor cores:

- SHARC+ cores:
  - Functional simulation.
  - Cycle-accurate simulation.
- ARM Cortex-A5 core:
  - Functional simulation, using the open source QEMU simulator.

The SHARC+ Cycle Accurate Simulator is accurate to within a tolerance of +/- 3%, compared to silicon. The following cases are known to differ from the silicon:

- `cjump`/`rframe` instructions that use the I6/I7 registers.
- Some inaccuracies in stalls during reads and writes to core MMR registers.
- A write to `CCNTR` followed by an LCE-based non-branch instruction.
- A floating-point compute or any multiplier operation followed by a move of the result to any register outside the relevant execution unit.

For more information on these new simulators, see the Simulator User's Guide in the online help.

### 3.7 Debugging Multiple Cores

Heterogeneous debugging (the ARM Cortex-A5 and two SHARC+ cores) is supported when debugging applications with the CrossCore Debugger and an emulator.

When the processor starts execution, the SHARC+ cores are held in IDLE until enabled by the application running on the ARM core. This means:

- If the application running on Core 0 does not explicitly enable the other cores, the SHARC+ cores will not run their applications.
- When you load your application into the processor using the debugger and run to the start of `main()` on Core 0, the other cores will still be in IDLE.
The run-time libraries include the `adi_core_enable()` function to release other cores from IDLE. When you create new projects for ADSP-SC58x or ADSP-2158x processors and elect to generate template code for your projects, the IDE populates the `main()` function of the booting core with sample code that enables the other cores.

#### 3.7.1 Core Option Settings in Debug Configurations

In most cases, you will want your project executables to have the following options set:

| Application | Emulator: "Reset" | Emulator: "Run after Load" | Simulator: "Reset" | Simulator: "Run after Load" |
|---|---|---|---|---|
| Pre-load on booting core | Set | Set | N/A | N/A |
| Application on booting core | Unset | Set | Set | Unset |

These options are accessed in the "Modify configuration and continue" window during session startup: select the appropriate core/dxe and click "Edit…".

With this configuration, when you launch the emulator debug session, the pre-load file will run to completion, the main application for the booting core will be loaded, and it will run to the start of main(). The other cores will be loaded and halted. At this point, you can use Run > MP Resume to run the applications on your processor; when the booting core invokes adi_core_enable(), the respective core will start executing its application.

If "Run after load" is set for the non-booting cores, they will attempt to execute their applications before the booting core executes adi_core_enable(); this will be indicated in the debugger by showing those cores as Running when the booting core halts at main(). In this situation, you can halt those cores and then use Run > MP Resume to run all cores.
This behavior is due to a limitation in the processor's reset handling; see No System Reset under Known Problems and Limitations for details.

For debugging multiple cores with the simulator, refer to the Simulator User's Guide, section Features > Simulator Sessions > Handling Multiple DXE Files.

### 3.8 Debugging Only a SHARC+ Core

For convenience, you may want to debug only a SHARC+ core with the emulator, without running an ARM application each time to enable your SHARC+ core. This can be achieved by following these steps:

1. Create an XML file with the following contents:

```xml
<?xml version="1.0" standalone="yes" ?>
<custom-cces-proc-xml xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
    xsi:noNamespaceSchemaLocation="\Analog Devices\CrossCore Embedded Studio 2.0.0\System\ArchDef\ADSP-custom-board.xsd"
    processor-family="sharc"
    file="example_custom_board_support.xml">
  <custom-register-reset-definitions>
    <register name="RCU0_MSG_SET" reset-value="0x00180000" core="Common" />
  </custom-register-reset-definitions>
</custom-cces-proc-xml>
```

2. In the debug configuration, click on the Custom Board Support tab and check **Enable customizations**. Then point to the XML file that you have just created.

With this XML configuration in place, you do not need to unset the **Run after load** option on the SHARC+ core. Upon connecting to just the SHARC+ core, you should be halted at main(). Since you will only be debugging one core, there is no need to use any of the MP run-control options, such as MP Resume, in this case.

### 3.9 Creating Bootable LDR Files for ADSP-SC58x / ADSP-2158x

Multi-core bootable LDR files for ARM and SHARC+ cores can be created for the ADSP-SC58x / ADSP-2158x processors using the CCES elfloader tool described in the online help. Refer to **Loader for ADSP-SC58x / ADSP-2158x Multicore Processors** in the **Loader and Utilities Manual** for details.

The ADSP-SC58x / ADSP-2158x processors have on-chip L2 boot ROMs for the ARM and SHARC+ cores.
With the boot kernel code already resident in the on-chip boot ROMs, there is no need to load a boot kernel from the LDR files, as was required when booting applications for prior SHARC processor families. ADSP-SC58x / ADSP-2158x LDR files contain a series of bootable blocks in the format expected by the modern on-chip boot ROM developed by Analog Devices. This LDR file format is different from the ones used for existing SHARC processors. Refer to **Boot ROM and Booting the Processor** in the **ADSP-SC58x SHARC+ Processor Hardware Reference** for details.

## 4 ADSP-BF70x Processor Support

### 4.1 ADSP-BF70x silicon revision 1.0

CCES 2.0.0 supports silicon revisions 0.0 and 1.0 of ADSP-BF70x processors, with silicon revision 1.0 being the default. Silicon revision 1.0 includes a number of significant silicon fixes, including:

- Improved booting functionality.
- Support for misaligned data accesses.
- Corrections to LUT, MSI, and pin multiplexing.
- Corrections to the Branch Target Buffer.
- An updated Utility ROM.

#### 4.1.1 Utility ROM Improvements for ADSP-BF70x revision 1.0

Silicon revision 1.0 of ADSP-BF70x processors includes the following updates to the Utility ROM:

- The `libdsp`/`libcc`/`libc` components have been further optimized for the Blackfin+ core.
- uC/OS-III has been updated to use version 3.04.04 of Micrium's product.
- `LIBDRV` was updated to add new drivers for the RSI and HADC peripherals.

A new set of libraries, LDF symbol maps, include files, and DXE files for simulation debugging has been included in CCES 2.0.0 to support using the silicon revision 1.0 utility ROM of the ADSP-BF70x parts. The IDE-based simulator debug sessions and the chipfactory.exe command-line simulator automatically load the correct ROM DXE files based on the silicon revision that the application being simulated was built for.
⚠️ **Warning** Before running an executable on an ADSP-BF70x processor, ensure that the executable has been built for the correct silicon revision:

- Applications that use the utility ROM and are built for 0.0 silicon will not function correctly on 1.0 silicon.
- Applications that use the utility ROM and are built for 1.0 silicon will not function correctly on 0.0 silicon.
- Applications that do not use the utility ROM, or are built for revisions *any* or *none*, are unaffected by this update to the utility ROM.

### 4.2 Updated Silicon Anomaly Support

A number of silicon anomalies have been addressed in silicon revision 1.0, and their workarounds are no longer enabled by default. Refer to the Analog Devices Silicon Anomaly List for your respective processor for details.

#### 4.2.1 Assembler detection for STI-CLI silicon anomaly 19000010

The CCES 2.0.0 Blackfin assembler will issue a new anomaly detection warning, ea5526, for adjacent STI and CLI instructions. The warning indicates where a workaround is required for silicon anomaly 19000010, "STI Directly Before CLI Does Not Enable Interrupts".

> Compiler and runtime library workarounds for 19000010 were previously added in CCES 1.1.0.

#### 4.2.2 Watchdog service workaround for silicon anomaly 19000011

The watchdog system service provided in CCES 2.0.0 has been updated to incorporate a workaround for silicon anomaly 19000011, "The WRDO Bit in WDOG_CTL is Erroneously Cleared Under Certain Conditions".

#### 4.2.3 Branch Predictor cleared during application startup for silicon anomaly 19000047

The branch predictor can operate incorrectly if it learns from a control-flow instruction within an initcode and the subsequent application happens to have a different control-flow instruction mapped to the same location. To work around this situation, the startup code in CCES 2.0.0 flushes the branch predictor's learned information at the beginning of the application.
#### 4.2.4 Branch Predictor disabled during ISRs for silicon anomaly 19000054

Silicon anomaly 19000054 describes a situation where the branch predictor's operation can lead to a self-nested interrupt returning to User Mode instead of to the preceding interrupt level. To avoid this situation, the interrupt dispatchers in CCES 2.0.0 disable the branch predictor during interrupt service routines.

## 5 IDE Changes

The IDE has been upgraded from Eclipse 3.7.2, found in CCES 1.2.0, to Eclipse 4.4.0 in CCES 2.0.0. As the culmination of more than four years of development, this update brings a great deal of new functionality and features. Please refer to the individual release notes for the major Eclipse components for specifics on these new features:

### 5.1 Platform Changes Since 3.7

- New and Noteworthy in Eclipse 4.0 (Helios)
- New and Noteworthy in Eclipse 4.1 (Indigo)
- New and Noteworthy in Eclipse 4.2 (Juno)
- New and Noteworthy in Eclipse 4.3 (Kepler)
- New and Noteworthy in Eclipse 4.4 (Luna)

### 5.2 C/C++ Development Tools (CDT) Changes Since 8.0

- New and Noteworthy in CDT 8.1
- New and Noteworthy in CDT 8.2
- New and Noteworthy in CDT 8.3
- New and Noteworthy in CDT 8.4

### 5.3 New IDE Features Added Since CCES 1.2.0

- Native Git support via EGit
- Support for remote development via the Remote System Explorer
- Pipeline Viewer

### 5.4 Eclipse Features Removed Since CCES 1.2.0

- Native CVS support via Platform-CVS

### 5.5 Workspace Compatibility with Previous Versions of CCES

Due to the move from Eclipse 3.7 to 4.4, it may not be possible to use the same workspace with CCES 2.0 and earlier versions of CCES. When you point CCES 2.0 to a workspace created by CCES 1.x, you will see a message to this effect. We recommend keeping separate workspaces for multiple versions of CCES.

### 5.6 IDE Workspace Default Changed for CCES 2.0.0

The default workspace path is now more Linux-friendly: `$home/cces/2.0.0`.
For example, on Windows the default workspace path is `C:\Users\${username}\cces\2.0.0`, and on Linux it is `/home/${username}/cces/2.0.0`.

## 6 Toolchain Updates

### 6.1 SHARC Compiler

The CCES 2.0.0 SHARC compiler has been updated to provide improved language standards compliance. These changes bring the SHARC compiler in line with the Blackfin compiler, which was updated for CCES 1.1.0.

#### 6.1.1 Updated Language Standards Support

The compilers accept many features of the ANSI/ISO 14882:2011 standard (C++11) when the -c++11 switch is used. Note that the underlying run-time library conforms to ANSI/ISO 14882:2003. When the -c++ switch is used, the compilers conform to the ANSI/ISO 14882:2003 standard.

The -g++ switch may be used with the compilers. It directs the compilers to support many of the GNU G++ extensions to the C++ language. The -g++ switch may be used in conjunction with either the -c++ or -c++11 switch.

#### 6.1.2 Universal-character-names in narrow string literals

The way the SHARC compiler in CCES 2.0.0 (and the Blackfin compiler as of CCES 1.1.0) handles universal-character-names in narrow string literals has changed. Previously, the Unicode value of a universal-character-name appearing in a narrow string literal was truncated (with a warning) to the least-significant byte and represented as a single character in the value. Now the value is the UTF-8 variable-width encoding of the Unicode character. For example, the string "\u20AC" (the Euro symbol) was previously equivalent to "\xAC"; it is now equivalent to "\xE2\x82\xAC".

#### 6.1.3 Compiler error cc0137

The SHARC compiler in CCES 2.0.0 (and the Blackfin compiler in CCES 1.1.0) raises error cc0137 for uses of the decrement or increment operators on the result of a cast within a single expression. Previous versions of the Blackfin compiler and the CCES 1.1.0 SHARC compiler issued a warning for this problem. For example, the following source will cause the new error cc0137.
```c
void func(void *buffer, unsigned short us, int len)
{
    for (int i = 0; i < len; i++)
        *((unsigned short *)buffer)++ = us;   /* error cc0137 */
}
```

Correct the error by performing the cast in a separate expression from the decrement or increment. For the example above, the correction is shown below.

```c
void func(void *buffer, unsigned short us, int len)
{
    unsigned short *usPtr = (unsigned short *)buffer;
    for (int i = 0; i < len; i++) {
        *usPtr++ = us;
    }
}
```

#### 6.1.4 Inlining support in C99

The support for inline functions that do not specify a storage class has changed in the CCES 2.0.0 SHARC compiler (and the CCES 1.1.0 Blackfin compiler). Previously, such inline functions were implicitly treated as if they had static storage class. Such functions are now treated in a more standard-conforming way. The compiler support for inlining in C99 is described below. (Be aware that C++ also has "extern inline"; this section specifically refers to behavior in C99 mode.)

**inline (with static storage class)**

```c
static inline void func() { }
int main() { func(); }
```

- A static instance of "func" is generated when a call is not inlined or the address of the function is taken; the call and the address refer to the static instance.
- No instance of "func" is generated if all calls are inlined and the address of the function is not taken.
- The behavior is the same as C89 mode.
- The behavior is the same as gcc with C99 enabled.

**inline (with no storage class specifier)**

```c
inline void func() { }
int main() { func(); }
```

- If a call is not inlined, then function "func" is a static instance to which the call refers.
- If the address of func is taken, the address and all calls to func within the translation unit instead refer to an externally defined function.
- If all calls are inlined, then no static instance is generated.
- The behavior is the same as C89 mode, except that in C89 mode taking the address refers to the local instance.
- The behavior differs from gcc with C99 enabled: in C99 mode there is no local instance of the function generated, and all references are to a function defined elsewhere.

**inline (with extern storage class specifier)**

```c
extern inline void func() { }
int main() { func(); }
```

- An external definition of the function is always created; this could lead to multiply-defined symbols if it is declared in a header file.
- If a call is not inlined, then the call refers to the external definition in this module.
- If the address is taken, then it refers to the external definition in this module.
- This behavior differs from C89 mode, in which no definition of the function is created; taking the address or not inlining a call will lead to an external reference being created.
- The behavior is compatible with gcc with C99 enabled.

**Treatment of double-precision floating-point denormal values**

The hardware support for double-precision floating-point arithmetic on the ADSP-215xx and ADSP-SC58x families of processors flushes denormal input values to zero before each operation. In contrast, the run-time library support used with earlier SHARC parts, which lack double-precision hardware support, does not flush denormals to zero. Therefore, when denormals are present, the output given by programs built for ADSP-215xx and ADSP-SC58x processors and those built for earlier SHARC parts may not match exactly.

#### 6.1.5 cc21k SHARC compiler driver switch changes

- The `-aligned-stack` switch is no longer supported and will be ignored if used. By default, the compiler's stack usage will attempt to retain double-word alignment, assuming the stack is aligned at the start of a function, unless the `-no-aligned-stack` switch is used.
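The double-precision denormal behaviour described above can be explored on an ordinary host compiler. The sketch below is a host-side illustration only (the names `is_denormal` and `flush_to_zero` are illustrative, not part of any ADI library), assuming the host uses IEEE-754 gradual underflow rather than flush-to-zero:

```c
#include <assert.h>
#include <float.h>

/* Magnitude helper, avoiding a dependency on the math library. */
static double mag(double x) { return x < 0.0 ? -x : x; }

/* A denormal (subnormal) double is nonzero but smaller in
 * magnitude than DBL_MIN, the smallest normalized value. */
int is_denormal(double x) {
    return x != 0.0 && mag(x) < DBL_MIN;
}

/* Models what a flush-to-zero core effectively computes with:
 * denormal inputs are replaced by zero before each operation. */
double flush_to_zero(double x) {
    return is_denormal(x) ? 0.0 : x;
}
```

On such a host, `DBL_MIN / 2` is a denormal, so `flush_to_zero(DBL_MIN / 2)` yields zero, which is the input an ADSP-215xx/SC58x core would effectively operate on, whereas the software floating-point library on earlier SHARC parts would keep the denormal value.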
### 6.2 New Compiler Warnings

#### 6.2.1 cc1486: invalid section qualifier

The SHARC and Blackfin compilers will now issue warning `cc1486` for uses of `#pragma sections` and `#pragma default_section` with invalid or unknown section qualifiers. The invalid qualifier will be ignored as before, and compilation will continue. Fix this problem by selecting a correct section qualifier, or by deleting the invalid one if it is not required.

Valid SHARC section qualifiers are NO_INIT, RUNTIME_INIT, ZERO_INIT, DOUBLE32, DOUBLE64, DOUBLEANY, SW (for VISA parts, code only), NW (for VISA parts, code only), DMAONLY, PM, and DM (for data only). Valid Blackfin section qualifiers are NO_INIT, RUNTIME_INIT, ZERO_INIT, DOUBLE32, DOUBLE64, and DOUBLEANY.

### 6.3 Run-Time Library Changes

#### 6.3.1 Library performance optimizations

A number of functions in the SHARC library have received performance improvements, as they are common to SHARC and SHARC+ and have been optimized to avoid SHARC+ stalls. In some cases, these performance improvements may result in minor changes in computed values. Several SHARC functions have increased in code size to overcome stalls.

#### 6.3.2 exit

The behaviour of exit() has been rationalized between architectures to provide a consistent interface. This also removes any differing behaviour between threaded and non-threaded applications. The new behaviour is as follows:

- `_Exit(int val)` saves val to `_exit_value` and terminates the application (jumping to `__lib_prog_term`) without running any atexit handlers.
- `exit(int val)` invokes any atexit() handlers before calling `_Exit(val)` to save val to `_exit_value` and terminate the application.

#### 6.3.3 rand (SHARC only)

The rand implementation on SHARC processors has been replaced with an implementation that uses a 64-bit seed to give a period greater than 2^32. This will result in a different series of values returned from rand() compared to previous releases, but reduces the possibility of repeating patterns.
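The release notes do not disclose the new rand() algorithm. As a sketch of the general idea (a 64-bit state whose period comfortably exceeds the 2^32 mentioned above), here is a standard 64-bit linear congruential generator using Knuth's MMIX constants; this is illustrative only, not ADI's implementation:

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative 64-bit LCG: the full 64-bit state has period 2^64. */
typedef struct { uint64_t state; } rng64_t;

void rng64_seed(rng64_t *r, uint64_t seed) { r->state = seed; }

int rng64_next(rng64_t *r) {
    r->state = r->state * 6364136223846793005ULL + 1442695040888963407ULL;
    /* Return the high bits, which have the longest period, as a
     * non-negative 31-bit value in the style of rand(). */
    return (int)(r->state >> 33);
}
```

As with any seeded generator, the sequence is reproducible: seeding two instances identically yields the same series of values.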
#### 6.3.4 Data cache invalidation (Blackfin only)

The library functions dcache_invalidate() and cache_invalidate() now always invalidate the data caches for all Blackfin parts by modifying configuration bits. This approach is significantly faster than explicitly flushing portions of the cache, as was done on some Blackfin parts in previous releases.

This performance improvement means that the cache banks can no longer be invalidated individually, with the exception of ADSP-BF70x parts, where bank B can be invalidated without also invalidating bank A. Any other options will invalidate both cache banks, the equivalent of calling dcache_invalidate_both(). The behavior of instruction cache invalidation remains unchanged.

#### 6.3.5 The %a conversion specifier

In this release, the support for the %a conversion specifier conforms to the description in the C99 standard (ISO/IEC 9899). Compared to previous releases, you may notice the following differences in behavior when printing values using this specifier:

- If an application is built with the switch -double-size-64, then the %a conversion specifier will output arguments as IEEE double-precision values, regardless of whether the length modifier L has been specified. Previously, if neither a precision nor the L length modifier was specified, the %a conversion code would format arguments as a float and would display no more than 6 digits after the decimal point. For example:

```c
printf("%a\n", 0.1L);
```

According to the C99 standard, this statement displays the text `0x1.999999999999ap-4`, but previous releases would display `0x1.999999p-4` instead.

- Previously, a formatted value was not rounded when the precision was specified as zero. For example:

```c
printf("%.0a\n", 1.5);
```

According to the C99 standard, this statement displays the text `0x1p+1`, but previous releases would display `0x1p+0`.
**Note:** There is no change in behavior when the %a conversion code is used to read a string that represents a floating-point value and convert it to binary.

#### 6.3.6 INTR_TAPC0_KEYFAIL renamed

The ADSP-BF70x parts' def-header include files have been changed to rename macro INTR_TAPC0_KEYFAIL to INTR_TAPC_KEYFAIL. A definition of INTR_TAPC0_KEYFAIL will only be available if macro _INCLUDE_LEGACY_SYSREG_NAMES is defined before including the def-header.

#### 6.3.7 Core-management functions for multi-core processors

The run-time libraries for Blackfin and SHARC processors include functions to identify the current core and to enable other cores. These functions are:

- adi_core_enable()
- adi_core_id()

For details, refer to the library manual for the respective processor:

- C/C++ Library Manual for SHARC Processors
- C/C++ Compiler and Library Manual for Blackfin Processors

### 6.4 LDF and Linking Related Changes

#### 6.4.1 Implicit support for external memory sections in SHARC+ cores

In the LDFs for previous SHARC parts, implicit support was provided for some external memory input sections (seg_ext_data, seg_sdram, seg_ext_code) even when SDRAM was not enabled. In SHARC+ LDFs, these input memory sections are only available when SDRAM is enabled, either via the "Use external memory" checkbox in the LDF configuration or via the USE_SDRAM macro in the default non-generated LDFs.

#### 6.4.2 32-bit SHARC+ PM data changes

The LDFs for SHARC+ cores make use of a linker feature that allows sections of different types, such as DM or PM, to be mapped into a common byte-addressed memory segment, which means that memory does not need to be manually partitioned into segments for the various types. However, that feature has a limitation: it assumes that each section type corresponds to a particular memory width, for example 32 bits for DM and 48 bits for PM. This contrasts with mapping a section to a segment of the same type, where the section assumes the segment's width.
For example, a PM segment can have width 32 or 48. As a consequence, 'seg_pmda' sections with type PM, which were mapped to PM segments with width 32 on previous SHARC parts, would wrongly end up with width 48 when mapped to a byte-addressed segment. Therefore, for SHARC+ cores such as those in the ADSP-SC589, the compiler and run-time libraries declare word-addressed 'seg_pmda' sections as type DM rather than PM, to ensure that they end up with width 32.

#### 6.4.3 SHARC+ Cache support (changes required for custom LDFs)

The CCES run-time library now provides support for enabling instruction and data (DM/PM) caches on SHARC+ cores, using LDF symbols for configuration. Corresponding support is provided in the default generated and non-generated LDFs; by default, each cache is enabled with 16KB of cache space.

Custom LDFs will minimally need to define the following LDF symbols in order to link successfully. This leaves the caches disabled, so no memory layout alterations are required.

```
__ldf_icachesize = 0xffffffff;
__ldf_dmcachesize = 0xffffffff;
__ldf_pmcachesize = 0xffffffff;
```

#### 6.4.4 USE_L1_ONLY macro no longer has an effect

It is no longer possible to disable the use of any memory beyond L1 by defining the LDF macro USE_L1_ONLY.

#### 6.4.5 MEM_ASYNC sections renamed for ADSP-BF60x generated LDFs

The Startup/LDF add-in for the ADSP-BF60x parts family will rename the four ASYNC memory sections when the LDF is regenerated using CCES 2.0.0. The change makes the numbers used match the published memory map and the non-generated LDFs; previously they were in reverse order. Any user custom additions in generated LDFs using these sections (MEM_ASYNC0, MEM_ASYNC1, MEM_ASYNC2 and MEM_ASYNC3) may require corresponding changes to maintain the same memory usage seen prior to the change.

### 6.5 Blackfin Assembler and Branch Instruction Encoding

Some inconsistencies in the Blackfin assembler and linker support for JUMP and CALL instructions have been resolved.
This may mean that some existing assembly code will receive different encodings.

- When building for ADSP-BF7xx processors, JUMP and CALL instructions (without the .X suffix) that target labels will be expanded to 64-bit encodings with a 32-bit target if necessary, instead of triggering an out-of-range linker error.
- The -jcs2l linker switch is no longer needed to enable expansion to 64-bit branches. (It is still needed to enable expansion to indirect branches via the P1 register on ADSP-BF5xx and ADSP-BF6xx processors.)
- A JUMP (without suffix) to a numeric offset in the -0x1000000..0xFFFFFE JUMP.L range but outside the -0x1000..0xFFE JUMP.S range will be encoded as a 32-bit instruction, whereas with prior releases of CCES it would have been encoded as a 64-bit instruction.
- A CALL.L instruction has been added to allow users to explicitly select the 32-bit encoding with a 24-bit target (cf. JUMP.L). An out-of-range error results if the target is outside the -0x1000000..0xFFFFFE range.
- In disassembly output, relative branch instructions are now always printed with the appropriate suffix: .S for 16-bit encodings, .L for 32-bit encodings, and .XL for 64-bit encodings.

## 7 Known Problems and Limitations

### 7.1 No System Reset

Currently only a core reset is supported on the ADSP-SC58x/ADSP-2158x (Rev 0.1 silicon or older) and the ADSP-BF70x (Rev 0.0 silicon) processors, which has shown limitations when peripherals are running at the time of a core reset. There may be cases where you run an example, then reload to run it a second time, and get exceptions. This could be due to a peripheral interrupt being serviced at the wrong time, causing an exception. To fix this, identify the peripheral that is causing the issue and reset that peripheral in a pre-load file. See the Using Pre-load Files section of this document for more information on how to use them.
The other option would be to use the hard reset on the target board in between running examples. Using EngineerZone or sending a private support request is also a good option, so that we are aware of the issue and can add it to the default pre-load files for the upcoming release.

The nature of the core-only resets on the ADSP-SC58x and ADSP-2158x processors means that a core has to be running in order for a reset operation triggered by another core to take effect: if a core is halted by the emulator during the reset operation, the reset has no effect on the halted core. For this reason, the emulator sets all cores running after an application is downloaded to the processor from the IDE.

Beginning with Rev 1.0 silicon, the ADSP-BF70x supports system reset, so the issues associated with a core reset on this processor will no longer be present.

### 7.2 ICE-2000 JTAG Frequencies limited on ADSP-BF70x Rev 1.0 silicon

On silicon revision 1.0 of the ADSP-BF70x processor, the ICE-2000 will only work at JTAG frequencies up to 23 MHz. The 46 MHz JTAG frequency will not work properly.

### 7.3 No SWD debug support for ADSP-SC58x/ADSP-2158x

Currently the Target Configurator allows a platform to be created for SWD debugging (instead of JTAG) of ADSP-SC58x/ADSP-2158x processors, but SWD debugging will not work. JTAG is the only supported debug method for these processors.

### 7.4 GDB with OpenOCD with ICE-1000 or ADSP-SC584

When using GDB with OpenOCD and the ICE-1000 emulator, ensure the ICE-1000 is selected in the 'Target' tab when setting up a Debug Configuration. The configuration defaults to ICE-2000.

When using GDB with OpenOCD and the ADSP-SC584 EZ-Board, ensure ADSP-SC584 is selected in the 'Target' tab when setting up a Debug Configuration. The configuration defaults to ADSP-SC589 EZ-Board.
### 7.5 Access to Uninitialized External Memory on ADSP-SC58x/ADSP-2158x Processors

Access to a disabled memory (including speculative accesses) can result in a hang of the system, and the only fix is to perform a hard reset on the target board to recover from the hang. The following memories are affected:

<table>
<thead>
<tr>
<th>Memory</th>
<th>Affected on 0.0, 0.1 silicon</th>
<th>Affected on 1.0 silicon</th>
<th>SMPU Available</th>
</tr>
</thead>
<tbody>
<tr>
<td>DMC0</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>DMC1</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>SMC</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>PCIe</td>
<td>Yes</td>
<td>Yes</td>
<td>Yes</td>
</tr>
<tr>
<td>SHARC+ L1 via multi-processor space</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
<tr>
<td>Flash</td>
<td>Yes</td>
<td>No</td>
<td>No</td>
</tr>
</tbody>
</table>

There are two workarounds for this problem:

*Enable the memory:* If the memory is enabled, the memory device will respond to the access and the hang will not occur.

*Configure the Secure Memory Protection Unit:* If you do not intend to use the memory, you can configure the memory's corresponding SMPU (where available) to disable speculative reads and to disallow all access to the memory space. Accesses will be rejected by the SMPU and not passed on to the disabled memory device, so no hang will occur.

Apply this configuration (whether enabling the memory or configuring the SMPU) in the preload and/or initcode of your project. This will ensure that the workaround is applied before your application starts running, whether you are loading your application through the IDE or booting it from a peripheral. See Using Pre-Load Files in this document for further details.
### 7.6 Changing silicon revision on an ADSP-BF707 that contains SSL drivers added as source from the UI may not work if the driver is in ROM

If an application for the ADSP-BF707 contains the sources to the drivers that are in ROM (SPI, SPORT, UART, TWI, RSI, HADC), added via the SSL plugin, changing the silicon revision in the project may cause errors, because the UI might not define the appropriate macros to indicate that the driver used is not in the ROM. It is recommended that applications remove the driver sources, change the silicon revision, and then add the driver sources again.

### 7.7 OTP Programmer not visible through Examples Browser

The OTP Programmer example, which is needed when using the Command Line Device Programmer (CLDP.exe) with the "-device otp" switch, is not visible through the Examples Browser. A supplied dxe that can be used with CLDP.exe can be found in the CrossCore Embedded Studio installation directory under SHARC/Examples/Device_Programmer. To view or rebuild this project, choose "Import an existing CCES project" from the Welcome screen, then browse for the directory.

### 7.8 Ensure the volatile type qualifier is used where necessary

Compiler updates and improvements included in any CCES release can expose latent bugs in application software for the first time. For instance, the MemCopyArrayMode example found in many Blackfin BSP products failed for the first time when built with CCES 1.1.0 in the release configuration. The problem was tracked down to a pre-existing bug in the example: it had omitted a necessary volatile type qualifier on a variable declaration, required because the variable is updated in a callback ISR function. Please review the CCES help topic "The volatile Type Qualifier" and ensure that your applications make use of volatile where necessary.
### 7.9 Passing data between SHARC+ and ARM cores

CrossCore Embedded Studio 2.0.0 provides MCAPI drivers to assist applications in passing data between the cores of the processor. When passing data between the cores, be aware that some data types may vary in size between the ARM and SHARC+ cores. In particular, be aware of:

- **C enumerations.** For the SHARC+ processor, an enumeration is declared to be of type `int`. For bare-metal ARM applications, the GNU GCC compiler defaults to using the `-fshort-enums` switch, which uses the smallest possible data type to store the enumeration.
- **Bitfields.** The layout of the fields of a bitfield within the enclosing integer datatype is implementation-defined, and application developers may not rely on different toolchains using the same arrangement for a given bitfield declaration.

It is recommended that datatypes of specific sizes, such as those defined by `stdint.h`, are used to express data that must be accessed by heterogeneous cores.

### 7.10 No concurrent USB Host and Device stack capability for ADSP-SC58x Processors

The USB stack supports connection to either the USB0 (USB OTG) or the USB1 (USB HS) port provided on the ADSP-SC589 EZ-Board, but connection to both ports at the same time is not supported. However, you can switch between the USB0 (USB OTG) and USB1 (USB HS) ports while your application is running.

### 7.11 "Target not available"

When you launch a debug session on a multi-core processor, such as the ADSP-SC58x, ADSP-2158x, ADSP-BF561 or ADSP-BF60x processors, the debugger connects to each core separately. At any given time, one of the cores has the focus, and this defaults to the booting core of the processor. On occasion, the debugger may grant the focus to another core as the connections are created, and this may lead to a "Target not available" message and an inability to display assembly code in the disassembly window. Simply select the booting core in the Debug pane to grant the focus to the correct core.
### 7.12 Relaunching Debug Sessions with GDB with OpenOCD or QEMU

When using GDB with OpenOCD or QEMU, it is important to 'Terminate and Remove' the session before launching another session. Failure to do so could result in unstable behavior.

### 7.13 Stdio and Multithreaded Libraries in the GNU ARM Toolchain

The GNU ARM toolchain includes multithreaded libraries for use when building RTOS-based applications. The following issues relating to stdio are outstanding:

- Semaphores for the standard streams stdin, stdout and stderr are allocated on a per-thread basis, rather than globally. This means that applications will require an additional three semaphores per thread.
- If the first use of one of these three standard streams is as a parameter to one of the printf() family of functions, a NULL-pointer-dereference error will occur.

### 7.14 QEMU Memory Limitations

In CCES 2.0.0, QEMU models the ADSP-SC58x memory only partially. This has a greater impact on the ADSP-SC582 and ADSP-SC584 than on the ADSP-SC587 and ADSP-SC589, as ARM programs built for the former two processors are more likely to use memory space not supported in QEMU. As a consequence, a program may fail to load in QEMU, or may fail to execute when code or data are accessed from unsupported memory regions. This will be addressed in a future release.

### 7.15 Other Known Issues

For the latest anomalies, please consult our Software and Tools Anomalies Search page.
Combining Relational Algebra, SQL, and Constraint Programming Marco Cadoli and Toni Mancini Dipartimento di Informatica e Sistemistica Università di Roma “La Sapienza” Via Salaria 113, 00198 Roma, ITALY cadoli|tmancini@dis.uniroma1.it Abstract. The goal of this paper is to provide a strong interaction between constraint programming and relational DBMSs. To this end we propose extensions of standard query languages such as relational algebra (RA) and SQL, by adding constraint solving capabilities to them. In particular, we propose non-deterministic extensions of both languages, which are specially suited for combinatorial problems. Non-determinism is introduced by means of a guessing operator, which declares a set of relations to have an arbitrary extension. This new operator results in languages with higher expressive power, able to express all problems in the complexity class NP. Some syntactical restrictions which make data complexity polynomial are shown. The effectiveness of both languages is demonstrated by means of several examples. 1 Introduction The efficient solution of NP-hard combinatorial problems, such as resource allocation, scheduling, planning, etc. is crucial for many industrial applications, and it is often achieved by means of ad-hoc hand-written programs. Specialized programming languages [7,15] or libraries [10] for expressing constraints are commercially available. Data encoding the instance are either in text files in an ad-hoc format, or in standard relational DBs accessed through libraries callable from programming languages such as C++ (cf., e.g., [11]). In other words, there is not a strong integration between the data definition and the constraint programming languages. The goal of this paper is to integrate constraint programming into relational database management systems (R-DBMSs): to this end we propose extensions of standard query languages such as relational algebra (RA) and SQL, by adding constraint solving capabilities to them. 
In principle RA can be used as a language for testing constraints. As an example, given relations $A$ and $B$, testing whether all tuples in $A$ are contained in $B$ can be done by computing the relation $A-B$, and then checking its emptiness. Anyway, it must be noted that RA is unfeasible as a language for expressing NP-hard problems, since it is capable of expressing just a strict subset of the polynomial-time queries (cf., e.g., [1]). As a consequence, an extension is indeed needed.

The proposed generalization of RA is named \textit{NP-Alg}, and it is proven to be capable of expressing all problems in the complexity class NP. We focus on NP because this class contains the decisional version of most combinatorial problems of industrial relevance [8]. \textit{NP-Alg} is RA plus a simple \textit{guessing} operator, which declares a set of relations to have an arbitrary extension. Algebraic expressions are used to express constraints. Several interesting properties of \textit{NP-Alg} are provided: its data complexity is shown to be NP-complete, and for each problem \( \xi \) in NP we prove that there is a fixed query that, when evaluated on a database representing an instance of \( \xi \), solves it. Combined complexity is also addressed.

Since \textit{NP-Alg} expresses all problems in NP, an interesting question is whether a query corresponds to an NP-complete or to a polynomial problem. We give a partial answer to it, by exhibiting some syntactical restrictions of \textit{NP-Alg} with polynomial-time data complexity.

In the same way, NP-SQL is the proposed non-deterministic extension of SQL, the well-known language for querying relational databases [14], and has the same expressive power as \textit{NP-Alg}. We believe that writing an NP-SQL query for the solution of a combinatorial problem is only moderately more difficult than writing SQL queries for a standard database application.
The advantage of using NP-SQL is twofold: it is not necessary to learn a completely new language or methodology, and integration of the problem solver with the information system of the enterprise can be done very smoothly. The effectiveness of both \textit{NP-Alg} and NP-SQL as constraint modeling languages is demonstrated by showing several queries which specify combinatorial problems.

2 \textbf{NP-Alg: Syntax and semantics}

We refer to a standard definition of RA with the five operators \( \{\sigma, \pi, \times, -, \cup\} \) [1]. Other operators such as "\(\bowtie\)" (join) and "\(\div\)" (division) can be defined as usual. Temporary relations such as \( T = \mathit{expr}(\ldots) \) will be used to make expressions easier to read. As usual, queries are defined as mappings which are partial recursive and generic, i.e., constants are uninterpreted. Let \( D \) denote a finite relational database, \( edb(D) \) the set of its relations, and \( DOM \) the unary relation representing the set of all constants occurring in \( D \).

\textbf{Definition 1 (Syntax of \textit{NP-Alg}).} An \textit{NP-Alg} expression has two parts:

1. A set \( Q = \{Q_1^{[a_1]}, \ldots, Q_n^{[a_n]}\} \) of new relations of arbitrary arity, denoted as \textit{Guess} \( Q_1^{[a_1]}, \ldots, Q_n^{[a_n]} \). \textit{Sets \( edb(D) \) and \( Q \) must be disjoint.}
2. An \textit{ordinary expression} \( exp \) of RA on the new database schema \([\text{edb}(D), Q]\).

For simplicity, in this paper we focus on \textit{boolean queries}. For this reason we restrict \( exp \) to be a relation which we call \textit{FAIL}.

\textbf{Definition 2 (Semantics of \textit{NP-Alg}).} The semantics of an \textit{NP-Alg} expression is as follows:

1. For each possible extension \( ext \) of the relations in \( Q \) with elements in \( \text{DOM} \), the relation \( \text{FAIL} \) is evaluated, using ordinary rules of RA.
2.
If there is an extension \( ext \) such that \( \text{FAIL} = \emptyset \), the answer to the boolean query is "yes". Otherwise the answer is "no". When the answer is "yes", the extension of the relations in \( Q \) is a solution for the problem instance.

A trivial implementation of the above semantics obviously requires exponential time, since there are exponentially many possible extensions of the relations in \( Q \). Anyway, as we will show in Section 4.3, some polynomial-time cases indeed exist.

The reason why we focus on a relation named \( \text{FAIL} \) is that, typically, it is easy to specify a decision problem as a set of constraints (cf. forthcoming Sections 3 and 5). As a consequence, an instance of the problem has a solution iff there is an arbitrary choice of the guessed relations such that all constraints are satisfied, i.e., \( \text{FAIL} = \emptyset \). A \( \text{FOUND}^{(1)} \) query can anyway be defined as \( \text{FOUND} = \text{DOM} - \pi_{\$1}(\text{DOM} \times \text{FAIL}) \). In this case, the answer is "yes" iff there is an extension \( ext \) such that \( \text{FOUND} \neq \emptyset \).

3 Examples of \( \text{NP-Alg} \) queries

In this section we show the specifications of some NP-complete problems, as queries in \( \text{NP-Alg} \). All examples are on uninterpreted structures, i.e., on unlabeled directed graphs, because we adopt a pure RA with uninterpreted constants. As a side-effect, the examples show that, even in this limited setting, we are able to emulate integers and ordering. This is very important, because the specification of very simple combinatorial problems requires integers and ordering. In Section 5 we use the full power of \( \text{NP-SQL} \) to specify some real-world problems.

3.1 \( k \)-colorability

We assume a directed graph is represented as a pair of relations \( \text{NODES}^{(1)} \) and \( \text{EDGES}^{(2)}(\mathit{from}, \mathit{to}) \) (\( \text{DOM} = \text{NODES} \)).
A graph is \( k \)-colorable if there is a \( k \)-partition \( Q_1^{[1]}, \ldots, Q_k^{[1]} \) of its nodes, i.e., a set of \( k \) sets such that:

- \( \forall i \in [1,k],\ \forall j \in [1,k],\ j \neq i \rightarrow Q_i \cap Q_j = \emptyset \),
- \( \bigcup_{i=1}^{k} Q_i = \text{NODES} \),

and each set \( Q_i \) has no pair of nodes linked by an edge. The problem is well-known to be NP-complete (cf., e.g., [8]), and it can be specified in \( \text{NP-Alg} \) as follows:

$$\text{Guess } Q_1^{[1]}, \ldots, Q_k^{[1]}; \tag{1a}$$
$$FAIL\_DISJOINT = \bigcup_{i,j \in [1,k],\ i \neq j} \left( Q_i \cap Q_j \right); \tag{1b}$$
$$FAIL\_COVER = NODES \,\Delta\, \bigcup_{i=1}^{k} Q_i; \tag{1c}$$
$$FAIL\_PARTITION = FAIL\_DISJOINT \cup FAIL\_COVER; \tag{1d}$$
$$FAIL\_COLORING = \pi_{\$1}\Big[ \bigcup_{i=1}^{k} \big( \sigma_{\$1 \neq \$2}(Q_i \times Q_i) \big) \bowtie_{\$1 = \mathit{from},\ \$2 = \mathit{to}} EDGES \Big]; \tag{1e}$$
$$FAIL = FAIL\_PARTITION \cup FAIL\_COLORING. \tag{1f}$$

Expression (1a) declares $k$ new relations of arity 1. Expression (1f) collects all constraints a candidate coloring must obey:

- (1b) and (1c) make sure that $Q$ is a partition of NODES ("$\Delta$" is the symmetric difference operator, i.e., $A \Delta B = (A - B) \cup (B - A)$, useful for testing equality since $A \Delta B = \emptyset \iff A = B$).
- (1e) checks that each set $Q_i$ has no pair of nodes linked by an edge.

We observe that in the specification above the $FAIL\_PARTITION$ relation (1d) makes sure that an extension of $Q_1^{[1]}, \ldots, Q_k^{[1]}$ is a $k$-partition of NODES. Such an expression can be very useful for the specification of problems, so we introduce a metaexpression:

$$failPartition^{(1)}(N^{(k)}, P_1^{(k)}, \ldots, P_n^{(k)}),$$

which returns an empty relation iff $\{P_1^{(k)}, \ldots, P_n^{(k)}\}$ is a partition of $N^{(k)}$. The prefix $fail$ in the name of the metaexpression reminds us that it should be used in checking constraints.
Other metaexpressions will be introduced in the following examples, and are summarized in Section 3.4.

### 3.2 Independent set

Let a (directed) graph be defined, as usual, with the two relations $NODES^{(1)}$ and $EDGES^{(2)}$, and let $k \leq |NODES|$ be an integer, which is specified by a relation $K^{(1)}$ containing exactly $k$ tuples. A subset $N$ of NODES, with $|N| \geq k$, is said to be an independent set of size at least $k$ of the graph if $N$ contains no pair of nodes linked by an edge. The problem of determining whether an input graph has an independent set of size at least $k$ is NP-complete (cf., e.g., [8]), and it can be easily specified in NP-Alg.

However, since we have to "count" the elements of $N$, before presenting the NP-Alg query for the independent set problem, we show a method to determine whether two relations \( N^{(1)} \) and \( K^{(1)} \) have the same cardinality or not. Consider the following NP-Alg query:

$$\text{Guess } NK^{(2)};$$
$$FAIL = \left( \pi_{\$1}(NK) \,\Delta\, N \right) \cup \left( \pi_{\$2}(NK) \,\Delta\, K \right) \cup \pi\big( \sigma_{\$1 \neq \$3}( NK \bowtie_{\$2 = \$2} NK ) \big) \cup \pi\big( \sigma_{\$2 \neq \$4}( NK \bowtie_{\$1 = \$1} NK ) \big).$$

The idea is to guess a binary relation \( NK \) which is a bijection between \( N \) and \( K \). The first (resp. second) subexpression discards all candidates such that the first (resp. second) column is not the same as \( N \) (resp. \( K \)). The two joins make sure that exactly one \( N \) value is paired with exactly one \( K \) value (and vice versa). As a consequence, \( FAIL \) can be made empty iff \( N \) and \( K \) have the same cardinality. Obviously, deleting the first (resp. second) join, \( FAIL \) can be made empty iff \( |N| \geq |K| \) (resp. \( |N| \leq |K| \)).
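As a tiny worked illustration (ours, not from the paper), take \( N = \{a, b\} \), \( K = \{x, y\} \), and the guess \( NK = \{(a,x), (b,y)\} \):

$$\pi_{\$1}(NK) = \{a, b\} = N, \qquad \pi_{\$2}(NK) = \{x, y\} = K,$$

so both \(\Delta\)-subexpressions are empty; and since no two tuples of \( NK \) agree on either column, both join subexpressions are empty as well. Hence \( FAIL = \emptyset \) for this guess, witnessing \( |N| = |K| \).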
Given the reusability of the previous expression, we define the metaexpressions \( failSameSize^{(1)}(N,K) \), \( failGeqSize^{(1)}(N,K) \), and \( failLeqSize^{(1)}(N,K) \) as shortcuts for the respective definitions. So, an NP-Alg query that specifies the independent set problem is the following:

\[
\text{Guess } N^{(1)};
\]
\[
FAIL = failGeqSize^{(1)}(N,K) \cup \pi\Big( (N \times N) \underset{S_1 = EDGES.from,\; S_2 = EDGES.to}{\bowtie} EDGES \Big).
\]

The former subexpression of \( FAIL \) specifies the constraint \( |N| \geq k \) (to enhance readability, the guessing of the \( NK \) relation, used only by the metaexpression, is omitted). The latter one returns an empty relation iff no pair of nodes in \( N \) is linked by an edge. An extension of \( N \) is an independent set (of size at least \( k \)) of the input graph iff the corresponding \( FAIL \) relation is empty.

### 3.3 More examples

We can specify in NP-Alg other famous problems over graphs, like dominating set, transitive closure (TC), and Hamiltonian path (HP). We recall that TC, although a polynomial-time problem, is not expressible in RA (cf., e.g., [1]), because it intrinsically requires a form of recursion. In NP-Alg recursion can be simulated by means of guessing. HP is the problem of finding a traversal of a graph which touches each node exactly once. The possibility to specify HP in NP-Alg has some consequences which deserve comment. Consider a unary relation \( \text{DOM} \), with \(|\text{DOM}| = M \neq 0 \), and the complete graph \( C \) defined by the relations \( \text{NODES} = \text{DOM} \) and \( \text{EDGES} = \text{DOM} \times \text{DOM} \). An HP \( H \) of \( C \) is a total ordering of the \( M \) elements in \( \text{DOM} \); in fact it is a successor relation. The transitive closure of \( H \) is the corresponding less-than relation.
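Procedurally, this last claim is easy to verify. The sketch below computes the closure by fixpoint iteration; note that this unbounded iteration is exactly what RA cannot express, and what NP-Alg replaces by guessing the closure and checking it:

```python
def transitive_closure(rel):
    """Naive fixpoint iteration: repeatedly join the relation with itself
    until no new pairs appear."""
    closure = set(rel)
    while True:
        step = {(a, d) for (a, b) in closure for (c, d) in closure if b == c}
        if step <= closure:
            return closure
        closure |= step

# A successor relation on a 4-element ordered domain...
succ = {(1, 2), (2, 3), (3, 4)}
# ...has the less-than relation as its transitive closure.
less_than = {(a, b) for a in range(1, 5) for b in range(1, 5) if a < b}
```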
As a consequence, considering a bijection between the \( M \) elements in \( \text{DOM} \) and the subset \([1,M]\) of the integers, we actually have the possibility to “count” between 1 and \( M \). Furthermore, the Hamiltonian paths of \( C \) correspond to the permutations of \([1,M]\). Once the elements in \( \text{DOM} \) have been ordered (so we can consider them as integers), we can introduce arithmetic operations. Permutations are very useful for the specification of several problems. As an example, in the \( n \)-queens problem (in which the goal is to place \( n \) non-attacking queens on an \( n \times n \) chessboard) a candidate solution is a permutation of order \( n \), representing the assignment of a pair (row, column) to each queen. Interestingly, to check the attacks of queens on diagonals, in NP-Alg we can guess a relation encoding the subtraction of elements in \( \text{DOM} \). Finally, in the full paper we show the specification of other problems not involving graphs, such as satisfiability of a propositional formula and evenness of the cardinality of a relation. ### 3.4 Useful syntactic sugar Previous examples show that guessing relations as subsets of \( \text{DOM}^k \) (for integer \( k \)) is enough to express many NP-complete problems. Forthcoming Theorem 3 shows that this is indeed enough to express all problems in NP. Nevertheless, metaexpressions such as \( \text{failPartition} \) can make queries more readable. In this section we briefly summarize the main metaexpressions we designed. - \( \text{empty}^{(1)}(R) = \text{DOM} - \pi(\text{DOM} \times R^{(k)}) \), returns an empty relation if \( R \) is a non-empty one (and vice versa). - \( \text{complement}^{(k)}(R^{(k)}) \) returns the active complement (wrt \( \text{DOM}^k \)) of \( R \). - \( \text{failPartition}^{(1)}(N^{(k)}, P_1^{(k)}, \ldots, P_n^{(k)}) \) (cf. Subsection 3.1) returns an empty relation iff \( \{P_1^{(k)}, \ldots, P_n^{(k)}\} \) is a partition of \( N \). 
- \( \text{failSuccessor}^{(1)}(\text{Succ}^{(2)}, N^{(k)}) \) returns an empty relation iff \( \text{Succ} \) encodes a correct successor relation on the elements in \( N \), i.e., a 1-1 correspondence with the interval \([1,|N|]\).
- \( \text{failSameSize}^{(1)}(N,K) \), \( \text{failGeqSize}^{(1)}(N,K) \), \( \text{failLeqSize}^{(1)}(N,K) \) (cf. Subsection 3.2) return an empty relation iff \( |N| \) is, respectively, \( =, \geq, \leq \) \( |K| \). We remark that a relation \( NK \) satisfying \( \text{failGeqSize}^{(1)}(N,K) \) is actually a function with domain \( N \) and range \( K \). Since elements in \( K \) can be ordered (cf. Subsection 3.3), \( NK \) is also an integer function from the elements of \( N \) to the interval \([1,|K|]\). Integer functions are very useful for the specification of resource allocation problems, such as integer knapsack (see also the examples in Section 5.2). In the full paper we show that we can guess general functions (total, partial, injective, surjective) from a given domain to a given range.
- \( \text{failPermutation}^{(1)}(\text{Perm}^{(2k)}, N^{(k)}) \) returns an empty relation iff \( \text{Perm} \) is a permutation of the elements in \( N \). The ordering sequence is given by the first \( k \) columns of \( \text{Perm} \).

4 Computational aspects of NP-Alg

In this section we focus on the main computational aspects of NP-Alg: data and combined complexity, expressive power, and polynomial fragments.

4.1 Data and combined complexity

The data complexity, i.e., the complexity of query answering assuming the database as input and a fixed query (cf. [1]), is one of the most important computational aspects of a language, since queries are typically small compared to the database. Since we can express NP-complete problems in NP-Alg (cf. Section 3), the problem of deciding whether $FAIL \equiv \emptyset$ is NP-hard. Since the upper bound is clearly NP, we have the first computational result on NP-Alg.
**Theorem 1.** The data complexity of deciding whether $FAIL \equiv \emptyset$ for an NP-Alg query, where the input is the database, is NP-complete.

Another interesting measure is combined complexity, where both the database and the query are part of the input. It is possible to show that, in this case, determining whether $FAIL \equiv \emptyset$ is hard for the complexity class NE, defined as $\bigcup_{c>1} NTIME(2^{cn})$ (cf. [13]), i.e., the class of all problems solvable by a non-deterministic machine in time bounded by $2^{cn}$, where $n$ is the size of the input and $c$ is an arbitrary constant.

**Theorem 2.** The combined complexity of deciding whether $FAIL \equiv \emptyset$ for an NP-Alg query, where the input is both the database and the query, is NE-hard.

In the full paper the theorem is proved by reducing the NE-complete problem of the succinct 3-colorability of a graph [12] to the problem of deciding whether an NP-Alg query has a solution.

4.2 Expressive power

The expressiveness of a query language characterizes the problems that can be expressed as fixed, i.e., instance-independent, queries. In this section we prove the main result about the expressiveness of NP-Alg, by showing that it captures exactly NP, or equivalently (cf. [6]) the queries in the existential fragment of second-order logic ($\exists SO$). Of course it is very important to be assured that we can express all problems in the complexity class NP. In fact, Theorem 1 only says that we are able to express some problems in NP. We recall that the expressive power of a language is in general less than or equal to its data complexity. In other words, there exist languages whose data complexity is hard for a class $C$ in which not every query in $C$ can be expressed; several such languages are known, cf., e.g., [1].
In the following, $\sigma$ denotes a fixed set of relational symbols not including equality "=", and $S$ denotes a list of variables ranging over relational symbols distinct from those in $\sigma$. By Fagin's theorem [6] any NP-recognizable collection $\mathcal{D}$ of finite databases over $\sigma$ is defined by a second-order existential formula. In particular, we deal with second-order formulae of the following kind:

$$(\exists S)(\forall X)(\exists Y)\, \varphi(X, Y), \qquad (2)$$

where $\varphi$ is a first-order formula containing variables among $X, Y$ and involving relational symbols in $\sigma \cup S \cup \{=\}$. The reason why we can restrict our attention to second-order formulae in the above normal form is explained in [12]. As usual, "=" is always interpreted as "identity". We illustrate a method that transforms a formula of the kind (2) into an NP-Alg expression $\psi$. The transformation works in two steps:

1. the first-order formula $\varphi(X, Y)$ obtained by eliminating all quantifiers from (2) is translated into an expression $PHI$ of plain RA;
2. the expression $\psi$ is defined as:

$$\text{Guess } Q^{(a_1)}_1, \ldots, Q^{(a_n)}_n; \quad FAIL = DOM^{|X|} - \pi(PHI), \qquad (3)$$

where $a_1, \ldots, a_n$ are the arities of the $n$ predicates in $S$, and $|X|$ is the number of variables occurring in $X$.

The first step is rather standard, and is briefly sketched here just for future reference. A relation $R$ (with the same arity) is introduced for each predicate symbol $r \in \sigma \cup S$. An atomic formula of first-order logic is translated as the corresponding relation, possibly prefixed by a selection that accounts for constant symbols and/or repeated variables, and by a renaming of attributes mapping the arguments. Selection can also be used for dealing with atoms involving equality.
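The atomic case can be made concrete. In the sketch below (our illustration, not the paper's formalism), relations are Python sets of tuples; a `('const', v)` argument triggers a selection on a constant, a repeated variable name triggers a selection for equality of columns, and the surviving columns are renamed to the variable names:

```python
def atom_to_relation(R, args):
    """Translate an atomic formula r(args) over relation R into a set of
    variable bindings, handling constants and repeated variables."""
    out = set()
    for row in R:
        binding = {}
        ok = True
        for arg, val in zip(args, row):
            if isinstance(arg, tuple) and arg[0] == 'const':
                ok = arg[1] == val          # selection on a constant symbol
            elif arg in binding:
                ok = binding[arg] == val    # selection on a repeated variable
            else:
                binding[arg] = val          # renaming: attribute -> variable
            if not ok:
                break
        if ok:
            out.add(tuple(sorted(binding.items())))
    return out
```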
Inductively, the relation corresponding to a complex first-order formula is built as follows:

- $f \land g$ translates into $F \bowtie G$, where $F$ and $G$ are the translations of $f$ and $g$, respectively;
- $f \lor g$ translates into $F' \cup G'$, where $F'$ and $G'$ are derived from the translations $F$ and $G$ to account for the (possibly) different schemata of $f$ and $g$;
- $\neg f(Z)$ translates into $\rho_{S_1 \rightarrow F.S_1, \ldots, S_{|Z|} \rightarrow F.S_{|Z|}}(DOM^{|Z|} - F)$.

Relations obtained through such a translation will be called *q-free*. The following theorem claims that the above translation is correct.

**Theorem 3.** For any NP-recognizable collection $\mathcal{D}$ of finite databases over $\sigma$ – characterized by a formula of the kind (2) – a database $D$ is in $\mathcal{D}$, i.e., $D \models (\exists S)(\forall X)(\exists Y)\, \varphi(X, Y)$, if and only if $FAIL \equiv \emptyset$ when $\psi$ (cf. formula (3)) is evaluated on $D$.

4.3 Polynomial fragments

Polynomial fragments of second-order logic have been presented in, e.g., [9]. In this section we use some of those results to show that it is possible to isolate polynomial fragments of NP-Alg.

**Theorem 4.** Let $s$ be a positive integer, $PHI$ a q-free expression of RA on the relational vocabulary $edb(D) \cup \{Q^{(s)}\}$, and $Y_1, Y_2$ the names of two attributes of $PHI$. An NP-Alg query of the form:

\[
\text{Guess } Q^{(s)}; \quad FAIL = (DOM \times DOM) - \pi_{Y_1, Y_2}(PHI)
\]

can be evaluated in polynomial time.

Some interesting queries obeying the above restriction can indeed be formulated. As an example, *2-colorability* can be specified as follows (when $k = 2$, $k$-colorability, cf. Section 3.1, becomes polynomial):

\[
\text{Guess } C^{(1)}; \quad FAIL = DOM \times DOM - \big[ \text{complement}(EDGES) \cup C \times \text{complement}(C) \cup \text{complement}(C) \times C \big].
\]

$C$ and its complement denote the 2-partition. The constraint states that each edge must go from one subset to the other one. Another polynomial problem of this class is *2-partition into cliques* (cf., e.g., [8]), which amounts to deciding whether there is a 2-partition of the nodes of a graph such that the two induced subgraphs are complete. An NP-Alg expression which specifies the problem is:

\[
\text{Guess } P^{(1)}; \quad FAIL = DOM \times DOM - \big[ \text{complement}(P) \times P \cup P \times \text{complement}(P) \cup EDGES \big].
\]

A second polynomial class (in which, e.g., the *disconnectivity* problem, i.e., checking whether a graph is not connected, can be expressed) is defined by the following theorem.

**Theorem 5.** Let $PHI(\mathbf{X}_1, \ldots, \mathbf{X}_k, \mathbf{Y}_1, \mathbf{Y}_2)$ ($k > 0$) be a q-free expression of RA on the relational vocabulary $edb(D) \cup \{Q^{(1)}\}$. An NP-Alg query of the form:

\[
\text{Guess } Q^{(1)}; \quad X(\mathbf{X}_1, \ldots, \mathbf{X}_k) = PHI(\mathbf{X}_1, \ldots, \mathbf{X}_k, \mathbf{Y}_1, \mathbf{Y}_2) \div \rho_{S_1 \rightarrow Y_1,\, S_2 \rightarrow Y_2}(DOM \times DOM); \quad FAIL = \text{empty}(X)
\]

can be evaluated in polynomial time.

The classes identified by the above theorems correspond respectively to the $Eaa$ and $E_1^*aa$ classes of [9], which are proved to be polynomial by a mapping into instances of 2SAT.

5 The NP-SQL language

In this section we describe the NP-SQL language, a non-deterministic extension of SQL with the same expressive power as NP-Alg, and present some specifications written in this language.

5.1 Syntax of NP-SQL

NP-SQL is a strict superset of SQL. The problem instance is described as a set of ordinary tables, using the data definition language of SQL. The novel construct CREATE PROBLEM is used to specify a problem. It has two parts, which correspond to the two parts of Definition 1:

1.
definition of the guessed tables, by means of the new keyword GUESS; 2. specification of the constraints that must be satisfied by guessed tables, by means of the standard SQL keyword CHECK. Furthermore, the user can specify the desired output by means of the new keyword RETURN. In particular, the output is computed when an extension of the guessed tables satisfying all constraints is found. Of course, it is possible to specify many guessed tables, constraints and return tables. The syntax is as follows (keywords are either capitalized or quoted):

```
CREATE PROBLEM problem_name '('
  ( GUESS TABLE table_name ['(' aliases ')'] AS guessed_table_spec )+
  ( CHECK '(' condition ')' )+
  ( RETURN TABLE return_table_name AS query )*
')'
```

The guessed table table_name gets its schema from its definition guessed_table_spec. The latter expression is similar to a standard SELECT-FROM-WHERE SQL query, except for the FROM clause, which can also contain expressions such as:

```
  SUBSET OF SQL_FROM_clause
| [TOTAL | PARTIAL] FUNCTION_TO '(' (range_table | min .. max) ')'
    AS field_name_list OF SQL_FROM_clause
| (PARTITION '(' n ')' | PERMUTATION) AS field_name OF SQL_FROM_clause
```

with SQL_FROM_clause being the content of an ordinary SQL FROM clause (e.g., a list of tables). The schema of such expressions consists of the attributes of SQL_FROM_clause, plus the extra field_name (or field_name_list), if present. In the FROM clause the user is supposed to specify the shape of the search space, either as a plain subset (as in NP-Alg), or as a mapping (i.e., partition, permutation, or function) from the domain defined by SQL_FROM_clause. Mappings require the specification of the range and of the name of the extra field(s) containing range values. As for PERMUTATION, the range is implicitly defined to be a subset of the integers. As for FUNCTION_TO, the range can be either an interval min .. max of an SQL enumerable type (e.g., integers) or the set of values of the primary key of a table denoted by range_table.
The optional keyword PARTIAL means that the function can be defined over a subset of the domain (the default is TOTAL). We recall that using partitions, permutations or functions does not add any expressive power to the language (cf. Section 3.4). Finally, the query that defines a return table is an ordinary SQL query on the tables defining the problem instance plus the guessed ones, and it is evaluated for an extension of the guessed tables satisfying all constraints. Once a problem has been specified, its solution can be obtained with an ordinary SQL query on the return tables:

```
SELECT field_name_list
FROM problem_name.return_table_name
WHERE cond
```

The table ANSWER(n INTEGER) is implicitly defined locally to the CREATE PROBLEM construct, and it is empty iff the problem has no solution.

5.2 Examples

In this section we exhibit the specification of some problems in NP-SQL. In particular, to highlight its similarity with NP-Alg, we show the specification of the graph coloring problem of Section 3.1. Afterwards, we exploit the full power of the language and show how some real-world problems can be easily specified.

**k-colorability** We assume an input database containing relations NODES(n), EDGES(f, t) (encoding the graph), and COLORS(id, name) (listing the k colors).

```
CREATE PROBLEM Graph_Coloring (
  // COLORING contains tuples of
  // the kind <NODES.n, COLORS.id>,
  // with COLORS.id arbitrarily chosen.
  GUESS TABLE COLORING AS
    SELECT n, color
    FROM TOTAL FUNCTION_TO(COLORS) AS color OF NODES
  CHECK ( NOT EXISTS (
    SELECT * FROM COLORING C1, COLORING C2, EDGES
    WHERE C1.n <> C2.n AND C1.n = EDGES.f AND C2.n = EDGES.t
      AND C1.color = C2.color ))
  RETURN TABLE SOLUTION AS SELECT * FROM COLORING
)
```
The GUESS part of the problem specification defines a new (binary) table COLORING, with fields \texttt{n} and \texttt{color}, as a total function from the set of NODES to the set of COLORS. The CHECK statement expresses the constraint an extension of the COLORING table must satisfy to be a solution of the problem, i.e., that no two distinct nodes linked by an edge are assigned the same color. The RETURN statement defines the output of the problem by a query that is evaluated for an extension of the guessed table which satisfies every constraint. The user can ask for such a solution with the statement

\begin{verbatim}
SELECT * FROM Graph_Coloring.SOLUTION
\end{verbatim}

As described in the previous subsection, if no coloring exists, the system table \texttt{Graph_Coloring.ANSWER} will contain no tuples. This can easily be checked by the user, in order to obtain only a significant \texttt{Graph_Coloring.SOLUTION} table.

**Aircraft landing** The aircraft landing problem \cite{2} consists in scheduling landing times for aircraft. Upon entering the radar range of the air traffic control (ATC) at an airport, a plane requires a landing time and a runway on which to land. The landing time must lie within a specified time window, bounded by an earliest time and a latest time, depending on the kind of aircraft. Each plane has a most economical, preferred speed. A plane is said to be assigned its target time if it is required to fly in to land at its preferred speed. If ATC requires the plane to either slow down or speed up, a cost is incurred. The bigger the difference between the assigned landing time and the target landing time, the bigger the cost.
Moreover, the amount of time between two landings must be greater than a specified minimum (the separation time), which depends on the planes involved. Separation times depend on whether the aircraft land on the same or on different runways (in the latter case they are smaller). Our objective is to find a landing time for each planned aircraft, encoded in a guessed relation \texttt{LANDING}, satisfying all the previous constraints and such that the total cost is less than or equal to a given threshold. The input database consists of the following relations:

- \texttt{AIRCRAFT}(\texttt{id}, \texttt{target\_time}, \texttt{earliest\_time}, \texttt{latest\_time}, \texttt{bef\_cost}, \texttt{aft\_cost}), listing the aircraft planned to land, together with their target times and landing time windows; the cost associated with a delayed or advanced landing at time \( x \) is given by \( \texttt{bef\_cost} \cdot \max(0, t-x) + \texttt{aft\_cost} \cdot \max(0, x-t) \), where \( t \) is the aircraft target time.
- \texttt{RUNWAY}(\texttt{id}), listing all the runways of the airport.
- \texttt{SEPARATION}(\texttt{i}, \texttt{j}, \texttt{interval}, \texttt{same\_runway}) (\texttt{same\_runway} is a boolean field specifying whether aircraft \( i \) and \( j \) land on the same runway or not). A tuple \((i, j, int, s)\) means that if aircraft \( j \) lands after aircraft \( i \), then the landing times must be separated by \( int \). There are two such values, for \texttt{same\_runway} = 0 and 1, respectively. The relation contains a tuple for all combinations of \( i \), \( j \), and \texttt{same\_runway}.
- \texttt{MAXCOST}(\texttt{c}), containing just one tuple, the total cost threshold.

In the following specification, the search space is a total function which assigns each aircraft a landing time (in minutes after midnight) and a runway.
```
CREATE PROBLEM Aircraft_landing (
  GUESS TABLE LANDING(aircraft, runway, time) AS
    SELECT a1.id, runway, time
    FROM TOTAL FUNCTION_TO(RUNWAY) AS runway OF AIRCRAFT a1,
         TOTAL FUNCTION_TO(0..24*60-1) AS time OF AIRCRAFT a2
    WHERE a1.id = a2.id

  // Time window constraints
  CHECK ( NOT EXISTS (
    SELECT * FROM LANDING l, AIRCRAFT a WHERE l.aircraft = a.id
    AND ( l.time > a.latest_time OR l.time < a.earliest_time )
  ))

  // Separation constraints
  CHECK ( NOT EXISTS (
    SELECT * FROM LANDING l1, LANDING l2, SEPARATION sep
    WHERE l1.aircraft <> l2.aircraft AND ((
        l1.time <= l2.time AND sep.i = l1.aircraft AND
        sep.j = l2.aircraft AND (l2.time - l1.time) < sep.interval)
      OR (l1.time > l2.time AND sep.i = l2.aircraft AND
        sep.j = l1.aircraft AND (l1.time - l2.time) < sep.interval))
    AND ( ( l1.runway = l2.runway AND sep.same_runway = 1 )
       OR ( l1.runway <> l2.runway AND sep.same_runway = 0 ) )
  ))

  // Cost constraint
  CHECK ( NOT EXISTS (
    SELECT * FROM MAXCOST WHERE MAXCOST.c < (
      SELECT SUM(cost) FROM (
        SELECT a.id, (a.bef_cost * (a.target_time - l.time)) AS cost
        FROM AIRCRAFT a, LANDING l
        WHERE a.id = l.aircraft AND l.time <= a.target_time
        UNION
        SELECT a.id, (a.aft_cost * (l.time - a.target_time)) AS cost
        FROM AIRCRAFT a, LANDING l
        WHERE a.id = l.aircraft AND l.time > a.target_time
      ) AIRCRAFT_COST // Contains tuples <aircraft, cost>
    )
  ))

  RETURN TABLE SOLUTION AS SELECT * FROM LANDING
)
```

5.3 NP-SQL simulator

The NP-SQL simulator is an application written in Java, which works as an interface to a traditional R-DBMS. It simulates the behavior of an NP-SQL server by reading an input text file containing a problem specification (in the NP-SQL language) and looking for a solution.
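A toy version of such a simulation, for the Graph_Coloring problem of Section 5.2, can be sketched with an embedded R-DBMS (Python's sqlite3 here). The naive enumeration of the search space and the evaluation of the CHECK constraint as an ordinary query follow the text; everything else (function name, schema handling) is our sketch:

```python
import sqlite3
from itertools import product

def graph_coloring(nodes, edges, colors):
    """Enumerate the total functions n -> color (the GUESS search space)
    and test the CHECK constraint as a SQL query on each candidate."""
    db = sqlite3.connect(":memory:")
    db.executescript("""
        CREATE TABLE EDGES(f, t);
        CREATE TABLE COLORING(n, color);
    """)
    db.executemany("INSERT INTO EDGES VALUES (?, ?)", edges)
    check = """SELECT COUNT(*) FROM COLORING C1, COLORING C2, EDGES
               WHERE C1.n = EDGES.f AND C2.n = EDGES.t
                 AND C1.color = C2.color"""
    for assignment in product(colors, repeat=len(nodes)):
        db.execute("DELETE FROM COLORING")
        db.executemany("INSERT INTO COLORING VALUES (?, ?)",
                       zip(nodes, assignment))
        if db.execute(check).fetchone()[0] == 0:
            return dict(zip(nodes, assignment))  # the RETURN table, as a dict
    return None  # ANSWER is empty: no colouring exists
```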
CREATE PROBLEM constructs are parsed, creating the new tables (corresponding to the guessed ones) and an internal representation of the search space; ordinary SQL statements, instead, are sent directly to the DBMS. The search space is explored, looking for an element corresponding to a solution, by posing appropriate queries to the R-DBMS (set up so as to work in main memory). As soon as a solution is found, the results of the queries specified in the RETURN statements are accessible to the user. In the current implementation, used mainly to check the correctness of specifications, a simple-minded enumeration algorithm is used to explore the search space. In the future, we plan to perform the exploration by translating the problem specification into a third-party constraint programming system or into an instance of the propositional satisfiability problem. The latter approach has indeed proven promising in [4].

6 Conclusions, related and future work

In this paper we have tackled the issue of strong integration between constraint programming and up-to-date technology for storing data. In particular, we have proposed constraint languages which have the ability to interact with data repositories in a standard way. To this end, we have presented NP-Alg, an extension of relational algebra which is specially suited for combinatorial problems. The main feature of NP-Alg is the possibility of specifying, via a form of non-determinism, a set of relations that can have an arbitrary extension. This allows the specification of a search space suitable for the solution of combinatorial problems, with ordinary RA expressions defining constraints. Although NP-Alg provides just a very simple guessing operator, many useful search spaces, e.g., permutations and functions, can be defined as syntactic sugar. Several computational properties of NP-Alg have been shown, including data and combined complexity, and expressive power.
Notably, the language is shown to capture exactly all the problems in the complexity class NP, which includes many combinatorial problems of industrial relevance. In the same way, we have proposed NP-SQL, a non-deterministic extension of SQL with the same expressive power as NP-Alg. The effectiveness of NP-Alg and NP-SQL, both as complex query languages and as constraint modeling languages, has been demonstrated by showing several queries which specify combinatorial problems. As for future work, we plan to increase the number of polynomial cases of NP-Alg, in particular by considering classical results on the complexity of second-order logic. Moreover, we plan to extend both languages to account for optimization problems, and to build a significantly more sophisticated implementation of the NP-SQL simulator, by using efficient constraint propagation techniques (e.g., by translation into propositional satisfiability [4]) and by making it able to recognize the polynomial cases. Several query languages capable of capturing the complexity class NP have been presented in the literature. As an example, in [12] an extension of datalog (the well-known recursive query language) allowing negation is proved to have such a property. A different extension of datalog, without negation but with a form of non-determinism, is proposed in [3]. On the other hand, NP-Alg captures NP without recursion. Actually, recursion can be simulated by non-determinism, and it is possible to write, e.g., the transitive closure query in NP-Alg. Several languages for constraint programming are nowadays available. For some of them, e.g., ECLIPSe [5], a traditional programming language such as PROLOG is enhanced by means of specific constructs for specifying constraints, which are then solved by highly optimized algorithms. In other modeling languages, such as OPL [15] and AMPL [7], the problem is specified by means of an ad-hoc syntax.
Similarly to NP-Alg and NP-SQL, they support a clear distinction between the data level and the problem description level. OPL also has a constraint programming language which allows the user to express preferences on the search methods, a feature missing in the current version of NP-SQL.

References

5. ECLIPSe Home page, www-icparc.doc.ic.ac.uk/eclipse/.
Automatic Repair of Buggy If Conditions and Missing Preconditions with SMT

Favio DeMarco, Jifeng Xuan, Daniel Le Berre, Martin Monperrus

HAL Id: hal-00977798, https://hal.archives-ouvertes.fr/hal-00977798, submitted on 11 Apr 2014. HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Favio DeMarco, Universidad de Buenos Aires, Buenos Aires, Argentina
Jifeng Xuan, INRIA Lille - Nord Europe, Lille, France
Daniel Le Berre, University of Artois & CNRS, Lens, France
Martin Monperrus, University of Lille & INRIA, Lille, France

ABSTRACT

We present Nopol, an approach for automatically repairing buggy if conditions and missing preconditions. As input, it takes a program and a test suite which contains passing test cases modeling the expected behavior of the program and at least one failing test case embodying the bug to be repaired. It consists of collecting data from multiple instrumented test suite executions, transforming this data into a Satisfiability Modulo Theory (SMT) problem, and translating the SMT result, if one exists, into a source code patch. Nopol repairs object-oriented code and allows the patches to contain nullness checks as well as specific method calls.
Categories and Subject Descriptors D.1.2 [Programming Techniques]: Automatic Programming; D.2.5 [Software Engineering]: Testing and Debugging

General Terms Algorithms, Verification

Keywords Automatic repair, test suite, buggy if condition, missing precondition, SMT, angelic fix localization

1. INTRODUCTION Automatic software repair consists in automatically fixing known bugs in a program. For instance, an automatic software repair approach can generate a patch that makes a failing test case pass. This is “test-suite based program repair”, as pioneered by Le Goues et al. [7], and further explored by Nguyen et al. [15] as well as Kim et al. [11]. The main motivation of automatic software repair is to decrease the cost of fixing bugs. The synthesized patches can be proposed as potential solutions to developers [18] or used as is when a fix is urgent. In the latter case, it is believed that having a draft solution reduces the time needed to comprehend the bug and to design a fix [7, 15, 11]. In the context of automatic repair, a fault model refers to the kind of bugs that can be fixed with a given approach [12, 14]. In this paper, we concentrate on the following fault model: the automatic repair of buggy if conditions and missing preconditions. Both are members of the family of condition-related bugs. Pan et al. [16] as well as Martinez and Monperrus [13] have shown that fixes of such bugs are among the most common ones. Our repair approach, called Nopol, repairs buggy if conditions and missing preconditions of object-oriented source code written in Java. For instance, Nopol can synthesize a patch that adds a precondition as shown in Listing 1.

```java
+ if (l != null && l.size() > 0) {
      compute(l);
+ }
```

Listing 1: Example of synthesized patch. Nopol is an abbreviation for “No-Polillas” in Spanish, which literally means “No-Moth”.
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. CSTVA ’14, May 31 – June 7, 2014, Hyderabad, India. Copyright 2014 ACM 978-1-4503-2847-0/14/05 ...$15.00.

must output “false” for the failing test case\(^2\). The “false” is a fix oracle. Then, NOPOL collects information from test suite execution through code instrumentation. This information is basically the local program state at each fix point, and contains both “primitive” values (integers, booleans) as well as object-oriented ones (nullness, object state). Next, those runtime traces are transformed into a Satisfiability Modulo Theory (SMT) problem. An SMT solver says whether there exists a solution. If such a solution exists, it is translated back into source code, i.e. a source code patch is generated. NOPOL provides a complete approach for fixing buggy if conditions and missing preconditions. It blends existing techniques with novel ideas. From the literature, we reuse: the idea of fixing buggy if conditions [15], the concept of artificially manipulating the program state during program execution [19, 9], and the encoding of program synthesis as an SMT problem [8, 10].
The novelty of this paper lies in:
- the design of a repair approach for a new kind of fault: missing preconditions;
- an algorithm called “angelic fix localization” for identifying at once potential repair locations and repair oracles;
- the extension of the SMT encoding for handling nullness and certain method calls of object-oriented programs;
- a case study of a real bug on a real-world large piece of software, the Apache Commons Math library: 5000 lines of executable code and 352 test cases (test methods in JUnit).

2. BACKGROUND

2.1 Test Suite based Program Repair

Test-suite based program repair consists in repairing programs for which a test suite is available and for which at least one failing test case states the bug. This failing test case exposes either a regression or a new bug that has just been discovered. Then, repair algorithms search for patches that make the failing test pass while keeping the other test cases green. If such a patch is found, it fixes the bug and at the same time does not degrade the existing functionality. Test-suite based program repair has been mostly disseminated by the work of Le Goues et al. on GenProg [7], and is an actively explored research area [15, 11].

2.2 Oracle-based Program Repair

Nguyen et al. [15] have invented the idea of oracle-based program repair. It decomposes program repair into two phases. First, oracle-based program repair looks for a pair \((l, v)\) that would fix the bug under repair as follows: if one uses the value \(v\) at source code location \(l\), the bug is fixed. Nguyen et al. define two kinds of pairs \((l, v)\): (arithmetic assignment, arithmetic value) and (if condition, boolean value). The former could state for instance “if one assigns 3 to variable \(x\) at line 42 of Foo.java, the bug is fixed”. The latter expresses “if the if condition at line 57 of Foo.java is falsified, the bug is fixed”. In Nguyen’s terms, the value \(v\) (3 and false in our examples) is called the oracle.
We call this phase “oracle mining”. The second phase consists in finding a piece of code that, given the program state at location \(l\), would yield value \(v\). The patch is correct if and only if for all executions of location \(l\) (not only the buggy one), the synthesized expression outputs a correct value. We call this phase “repair synthesis”. In the context of test-suite based program repair, repair synthesis means that for all executions of location \(l\) by test cases, the synthesized expression must output a value that eventually enables the test case to pass. Note that a location can be executed \(n\) times in the same test case. In this case, the synthesized expression must output \(n\) values that, given all combinations and effects of the program, yield a passing test case. For oracle mining, Nguyen et al. [15] use symbolic execution: the oracle is a value that satisfies a set of constraints. For repair synthesis, they use oracle-guided program synthesis [10]. However, both phases can be implemented in different ways. For instance, in this paper, we replace symbolic execution with another technique (see Section 3.2). Repair synthesis can be based on constraint satisfaction or on evolutionary computation [7]. An interesting property of oracle-based program repair is to replace the concept of “fault localization” by the concept of “repair localization”. While “fault localization” emphasizes finding the root cause (the fault) to fix it, “repair localization” emphasizes finding places where a fix can be written. For instance, it may happen that there exist several different variables for which changing the initialization value fixes the bug. “Repair localization” is more pragmatic than fault localization: in real life there are many cases where the root cause cannot be identified at a reasonable cost, but where it is sufficient to mitigate the error propagation. Note that one of the repair locations can indeed be the root cause of the fault.
Each pair \((location, value)\) is a potential fix location. If a patch can be correctly synthesized for all executions, it becomes an actual fix location. Indeed, there may be different locations for which there exists a value that fixes the bug. This corresponds to what developers know from their everyday bug-fixing activities: the very same bug can be fixed at different places. In particular, it happens that a bug can be fixed at a place that is different from the root cause. In this case, one does not prevent the bug from appearing, but one prevents the fault from being propagated.

2.3 Buggy if condition bugs

Conditional statements (e.g., if (condition) {...} in Java) are widely used in programming languages. Pan et al. [16] show that among seven studied Java projects, up to 18.6% of bug fixes have changed a buggy condition in if statements. For a buggy program, a buggy if condition may lead to a different branch. In this paper, our tool, NOPOL, is motivated by fixing such conditional bugs. A buggy if condition is defined as a bug in the condition of an if/then/else statement. Pan et al. [16] divide buggy if condition fixing into six sub-patterns, i.e., the addition/removal of a clause, a variable, or an operator. The following is a real example of a buggy if condition found in the Apache Commons Math library: \(gcd\) is a method calculating the greatest common divisor of two integers. A condition in that method checks whether either of the two parameters \( u \) and \( v \) is equal to 0. In the buggy version, the developer compares the product of the two integers to zero. However, this may lead to an arithmetic overflow. A safer way to proceed is to compare each parameter to zero. Such a fix is synthesizable by Nopol.

```java
public static int gcd(int u, int v) {
-   if (u * v == 0) {
+   if ((u == 0) || (v == 0)) {
```

We will explain how to fix buggy if condition bugs in Section 3 and two case studies of solving this kind of bugs can be found in Sections 4.1 and 4.2.
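The overflow hazard in the buggy condition is easy to reproduce. The following self-contained sketch (our own illustration, not code from the paper or from Commons Math) contrasts the buggy and fixed zero tests:

```java
// For u = v = 65536, the 32-bit product is 2^32, which overflows to 0 in a
// Java int, so the buggy condition wrongly claims that one operand is zero.
public class GcdCondition {
    static boolean buggyIsZero(int u, int v) { return u * v == 0; }
    static boolean fixedIsZero(int u, int v) { return (u == 0) || (v == 0); }

    public static void main(String[] args) {
        System.out.println(buggyIsZero(65536, 65536)); // true, overflow artifact
        System.out.println(fixedIsZero(65536, 65536)); // false, correct answer
    }
}
```

This is exactly the kind of semantic difference that the failing test case exposes and that the synthesized condition must repair.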
2.4 Missing precondition bugs

Another type of common bugs related to branching is missing preconditions. One of the usages of preconditions is to distinguish different values of a variable, e.g., detecting null pointers or invalid indexes in an array. Developers add preconditions to ensure the program meets their expectations on variables. We define a missing precondition bug as a statement that lacks a proper precondition. An example of a missing precondition is the absence of null-pointer detection, as follows; the buggy version without the if throws a null-pointer exception at runtime. A case study of solving this kind of bugs can be found in Section 4.3.

```java
+ if (directory != null)
      File[] files = directory.listFiles();
```

3. OUR APPROACH

This section presents our approach for automatically repairing buggy IF conditions and missing preconditions in Java source code. Our approach blends existing ideas (such as encoding the program synthesis in SMT) with new ones.

3.1 Overview

Nopol is a test-suite based program repair approach dedicated to incorrect IF conditions and missing precondition bugs. Nopol requires a test suite which represents the program's expected functionality and in which a failing test case represents the bug to be fixed. Nopol uses a novel technique to identify potential repair locations (angelic fix localization). For each repair location, Nopol runs the whole test suite in order to collect the context of the conditional expression and its expected value in each test. This information is then used to generate an SMT formula modeling expressions which preserve the behavior of the expression for passing tests while modifying it for failing tests. If this SMT formula is satisfiable, the SMT solution is translated into a source code patch. Nopol supports a subset of object-oriented primitives (nullness and certain method calls). For instance, Nopol can output: Fix found!
At line 348 of file Foo.java, replace if (a>b) by if (l!=null && l.size()>0)

3.2 Angelic fix localization

As presented in Section 2.2, oracle-based program repair needs pairs \((location, value)\). SemFix [15] uses symbolic execution for extracting those pairs. In Nopol, we propose to use value replacement [9] instead. Value replacement [9] comes from fault localization research. It consists in replacing at runtime one value by another one. More generally, the idea is to artificially change the program state for locating faults. A couple of papers explore this idea: for instance, Zhang et al. [19] call it “predicate switching” and Chandra et al. [4] use the term “angelic debugging”. In this paper, we call “angelic fix localization” the technique of modifying the program state to find angelic pairs \((l, v)\).

Definition (angelic value). An angelic value is a value that is arbitrarily set during test execution by an omniscient angel and that enables a test to pass.

Angelic fix localization gives two key pieces of information: first, where a fix point may exist (a fix consists of a piece of code put at a certain fix point); second, the expected value for driving the repair synthesis. Our key insight is that the search space of angelic values is small for two kinds of bugs and locations: buggy IF conditions and missing preconditions.

3.2.1 For Buggy IF Conditions

For buggy IF conditions, angelic fix localization works as follows. For each IF condition that is evaluated during the test suite execution, an angel forces the IF condition to evaluate to true or false in the failing test case embodying the bug to be fixed. For a pair (IF condition, boolean value), if the failing test case now passes, it means that we have found a potential fix location and an oracle for SMT-based program repair. Algorithm 1 is the pseudocode of this algorithm.
One really sees the parallel with angelic debugging: when the failing test case executes, an angel comes and forces the IF condition to evaluate to either true or false, i.e. forces the program to take one branch or the other. It may happen that the same IF condition is executed several times in the same test case. In this case, angelic fix localization considers that the expression always yields the same value. Doing so, the search space of the angelic fix localization algorithm for buggy IF conditions remains small. Let \( n \) be the number of IF conditions that are executed in the failing test case. The search space is simply \( 2 \times n \) (checking independently \( n \) binary values). In practice, according to our experience with open-source test suites, a test case executes around \(10^1\) to \(10^2\) ifs and rarely more than \(10^3\). Note that our angelic fix localization algorithm for buggy IF conditions only requires running the failing test case, not the entire test suite.

3.2.2 For Missing Preconditions

The angelic fix localization for missing preconditions is slightly different from angelic fix localization for IF conditions. For each statement\(^1\) that is evaluated during the test suite execution, an angel forces to skip it. If the failing test case now passes, it means that a potential fix location has been found. The oracle for repair is then “false”, meaning that the precondition to be synthesized must output false (i.e. the statement should be skipped). Algorithm 2 is the pseudocode of this algorithm. Again, it is only a potential fix location. The repair synthesis may fail to find an expression which evaluates to false in all test cases but the failing one. Similarly to angelic fix localization for buggy IF conditions, if a statement is executed several times in the same test case, angelic fix localization completely skips it.
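The localization procedure for buggy IF conditions can be sketched as a brute-force search. In the following toy Java program (the example program, names and interception mechanism are ours, not Nopol's), each IF condition is routed through an interception point so that an "angel" can force it, and the \(2 \times n\) (condition, value) pairs are enumerated until the failing test passes:

```java
import java.util.Map;
import java.util.Optional;
import java.util.function.BiFunction;

// Toy sketch of angelic fix localization for buggy IF conditions.
public class AngelicFixLocalization {

    // forced.get(i) == null means "evaluate normally"; otherwise the angel's
    // value overrides the actual condition value.
    static boolean eval(int i, boolean actual, Map<Integer, Boolean> forced) {
        Boolean v = forced.get(i);
        return v != null ? v : actual;
    }

    // Buggy program: entry should be free from age 65 on, but the
    // condition uses '>' instead of '>='.
    static int fee(int age, Map<Integer, Boolean> forced) {
        if (eval(0, age > 65, forced)) {
            return 0;
        }
        return 10;
    }

    // Try every (condition index, forced value) pair on the failing input.
    static Optional<Map.Entry<Integer, Boolean>> locate(
            BiFunction<Integer, Map<Integer, Boolean>, Integer> program,
            int failingInput, int expectedOutput, int nConditions) {
        for (int i = 0; i < nConditions; i++) {
            for (boolean v : new boolean[] {true, false}) {
                if (program.apply(failingInput, Map.of(i, v)) == expectedOutput) {
                    return Optional.of(Map.entry(i, v));
                }
            }
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Failing test: fee(65) is expected to be 0 but returns 10.
        // Forcing condition 0 to true makes it pass, so (0, true) is angelic.
        System.out.println(locate(AngelicFixLocalization::fee, 65, 0, 1).get());
    }
}
```

The angelic pair found here says nothing yet about the repaired expression; it only tells the synthesizer where to work and what the patched condition must return on this test.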
The size of the search space of angelic fix localization for missing preconditions is simply the number of executed statements. For each of them, the failing test case is run once with that statement skipped.

3.2.3 Discussion

Nopol uses an existing fault localization technique to increase the likelihood of finding an oracle. The statements are not skipped randomly, but according to their “suspiciousness”. The suspiciousness of a statement measures its likelihood of containing a fault. Nopol uses the Ochiai spectrum-based metric [1] for that purpose. Given a program and a test suite, the suspiciousness \( \text{susp}(s) \) of a statement \( s \) is defined as follows:

\[ \text{susp}(s) = \frac{\text{failed}(s)}{\sqrt{\text{total}_\text{failed} \times (\text{failed}(s) + \text{passed}(s))}} \]

where \( \text{total}_\text{failed} \) denotes the number of all the failing test cases, and \( \text{failed}(s) \) and \( \text{passed}(s) \) respectively denote the number of failing test cases and the number of passing test cases which cover the statement \( s \). IF conditions are also manipulated by the angel in order of suspiciousness. We rank all the IF conditions based on their suspiciousness; the angel first manipulates the most suspicious executed if (resp. statement), then the second one, etc. If no angelic pair can be found, it may mean two things. First, if the IF condition (resp. statement) is executed only once in the failing test case, we know for sure that it is impossible to fix the bug by changing this particular condition (resp. adding a precondition before this statement). Second, if the IF condition (resp. statement) is executed more than once in the failing test case (say \( p \) times), there may exist a sequence of \( p \) angelic values (say true, true, false, true) resulting in a passing test case. However, recall that Nopol forces a single angelic value for all \( p \) executions within a given test case, so such sequences are not explored. How does this affect the effectiveness of the tool?
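Stepping back to the ranking step: the Ochiai metric defined above is straightforward to compute. A minimal sketch of it (our own code; the coverage counts in the example are hypothetical):

```java
// Ochiai suspiciousness as used for ranking repair locations:
// susp(s) = failed(s) / sqrt(totalFailed * (failed(s) + passed(s))).
public class Ochiai {
    // failedS / passedS: failing / passing tests covering statement s;
    // totalFailed: number of failing tests in the whole suite.
    static double susp(int failedS, int passedS, int totalFailed) {
        double denom = Math.sqrt((double) totalFailed * (failedS + passedS));
        return denom == 0 ? 0.0 : failedS / denom;
    }

    public static void main(String[] args) {
        // Covered by the only failing test and no passing test: maximally
        // suspicious, susp = 1 / sqrt(1 * 1) = 1.0.
        System.out.println(susp(1, 0, 1));
        // Covered by the failing test and three passing tests:
        // susp = 1 / sqrt(1 * 4) = 0.5.
        System.out.println(susp(1, 3, 1));
    }
}
```

Statements exercised by many passing tests are thus ranked lower, which matches the intuition that they are less likely to be at fault.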
According to our experiments with real test suites, most IF conditions are evaluated only once per test case. A systematic empirical study on this point is future work. SemFix uses symbolic execution for oracle mining, and symbolic execution is known to be heavyweight. In their paper [15], the authors apply it to small examples; we have applied our approach to real bugs, one being presented in Section 4. We do not have access to the code of SemFix, so we cannot measure their execution time against ours.

3.3 Runtime Trace Collection for Repair

Once Nopol has found an angelic pair (location, value), it collects the values that are accessible at this point of the program execution. Those values are meant to be used to synthesize a correct patch. There are different kinds of data to be collected.

3.3.1 Primitive Type Data Collection

At the location of an angelic pair, Nopol collects the values of all local variables, method parameters and fields that are typed with a basic primitive type (integer, float, boolean, etc.). They form the core of \( C_{l,m,n} \), the set of collected values at location \( l \) during the \( m \)-th execution of the \( n \)-th test case. \( C_{l,m,n} \) is also enriched with constants for further use during synthesis. In order to be able to synthesize conditions that use literals (e.g. “if \( x>0 \)”), we create a set of predefined constants to be passed to the SMT solver afterwards. We have defined two strategies so far for creating the set of predefined constants. The first one consists of \{0, -1, 1\}; it is a baseline that fixes many conditional bugs related to emptiness and off-by-one errors. The second one consists of collecting all numerical literals of the codebase. Assessing the effectiveness of both strategies is out of the scope of this paper.

3.3.2 Expected Outcome Data Collection

Let us call \( O \) the set of expected outcomes in order to pass all tests.
\( O_{l,m,n} \) is the expected outcome at location \( l \) during the \( m \)-th execution in order to pass the \( n \)-th test case. For buggy IF conditions, \( O_{l,m,n} \) is the expected outcome of the condition expression at \( l \). For failing test cases, the expected outcome is the angelic value. For passing test cases, the expected outcome is the actual one, i.e. the result of the evaluation of the actual IF condition expression.

\[ O_{l,m,n} = \begin{cases} \text{eval}(l) & \text{for passing test cases} \\ \text{angelic value} & \text{for failing test cases} \end{cases} \]

For missing preconditions, \( O_{l,m,n} \) is the expected value of the precondition of statement \( l \), i.e. true in all cases but the failing test cases. The latter comes from angelic fix localization: if the precondition returns false for the failing test case, the buggy statement is skipped and the test case passes.

\[ O_{l,m,n} = \begin{cases} \text{true} & \text{for passing test cases} \\ \text{false} & \text{for failing test cases} \end{cases} \]

NOPOL collects \( O_{l,m,n} \) for all executions of location \( l \).

3.3.3 Object-oriented Specific Data Collection

NOPOL aims at supporting the automatic repair of IF conditions and missing preconditions of object-oriented programs. In particular, we would like to support not-null checks and method calls to some extent. For instance, we would like to be able to synthesize the following missing precondition.

```java
+ if (l != null && l.size() > 0)
      compute(l);
```

To do so, in addition to collecting all values of primitive types, NOPOL collects two kinds of information. First, the nullness of objects encodes whether each object of the current scope is null or not. Second, NOPOL collects the output of “state query methods”, defined as the methods that enable one to inspect the state of objects and are side-effect free.
For instance, methods \( \text{size()} \) and \( \text{isEmpty()} \) on collections are state query methods. NOPOL is manually fed with a list of such methods. The list is set with domain-specific knowledge. For instance, in Java, it is easy for developers to identify such side-effect-free state query methods on core library classes such as String, File and Collections. For each type \( T \), those predefined methods are denoted \( sqm(T) \). NOPOL collects the nullness and the evaluation of state query methods for all objects in the scope (local variables, method parameters, fields) of an angelic pair.

3.3.4 Repair Equation

The repair synthesis of buggy IF conditions and missing preconditions consists in finding an expression (a function) \( \text{exp} \) such that

\[ \forall_{l,m,n} \ \text{exp}(C_{l,m,n}) = O_{l,m,n} \tag{1} \]

3.3.5 Number of Collected Values

Let us assume there are \( j \) primitive values and \( k \) objects (denoted \( O \)) in the scope of an angelic pair. In total, NOPOL collects the following values:

- the \( j \) primitive values;
- the \( k \) boolean values corresponding to the nullness of each object;
- \( \sum_{o \in O} |sqm(\text{type}(o))| \) values corresponding to the evaluation of the state query methods of all objects available in the scope;
- the constants.

All this information is used for finding a solution satisfying Equation 1. There are different ways of finding such a solution. NOPOL, like SemFix [15], uses a variation of oracle-guided component-based program synthesis [10] based on SMT.

3.4 Encoding Repair in SMT

We now present how we encode Equation 1 as an SMT problem. The solution of the SMT problem is then translated back into a boolean source code expression \( \text{exp} \) representing the correct IF condition or the missing precondition. Our encoding extends the SMT encoding defined in [15, 10]. In particular, we explicitly take into account the types of the variables so that a boolean expression can mix operations on booleans, integers and reals.
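The repair equation can be illustrated by a toy synthesizer that enumerates candidate predicates instead of calling an SMT solver (Nopol's actual synthesis is the SMT encoding described next; the candidate set and the trace data below are our own invention):

```java
import java.util.LinkedHashMap;
import java.util.Map;
import java.util.Optional;
import java.util.function.IntPredicate;

// Brute-force illustration of "find exp with exp(C) = O for every recorded
// execution". Each context C is reduced to a single collected int value.
public class RepairEquation {
    static final Map<String, IntPredicate> CANDIDATES = new LinkedHashMap<>();
    static {
        CANDIDATES.put("x > 0",  x -> x > 0);
        CANDIDATES.put("x < 0",  x -> x < 0);
        CANDIDATES.put("x == 0", x -> x == 0);
        CANDIDATES.put("x != 0", x -> x != 0);
    }

    // contexts[i] is C for the i-th execution, outcomes[i] the expected O.
    static Optional<String> synthesize(int[] contexts, boolean[] outcomes) {
        outer:
        for (Map.Entry<String, IntPredicate> c : CANDIDATES.entrySet()) {
            for (int i = 0; i < contexts.length; i++) {
                if (c.getValue().test(contexts[i]) != outcomes[i]) {
                    continue outer; // this candidate contradicts one execution
                }
            }
            return Optional.of(c.getKey()); // consistent with every execution
        }
        return Optional.empty();
    }

    public static void main(String[] args) {
        // Passing executions expect their actual outcome; the failing one
        // expects its angelic value (true here).
        int[] xs = {5, -3, 0};
        boolean[] os = {false, false, true};
        System.out.println(synthesize(xs, os).orElse("no patch")); // x == 0
    }
}
```

The SMT encoding replaces this naive enumeration with a symbolic search over compositions of building blocks, which scales to much larger expression spaces.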
3.4.1 Building Blocks

We define a building block (called component in [10]) as a type of expression that can appear in the boolean expression to be synthesized. For instance, the logical comparison operator “\( > \)” is a building block. As building block types, we consider the comparison operators \((>, <, \neq, \leq, \geq)\), the three arithmetic operators \((+, -, \times)\)\(^4\) and the boolean operators \((\land, \lor, \neg)\). The same type of building block can appear multiple times in the same expression. We define the \( i \)-th building block \( b_i \) as a tuple of input variables \( I_i \), an output variable \( r_i \), and an expression \( \phi_i(I_i, r_i) \) encoding the meaning of the building block (e.g. \( r_i = i_1 < i_2 \) for a comparison block). That is, \( b_i = (\phi_i(I_i, r_i), I_i, r_i) \). There exists one building block whose output \( r_i \) is bound to \( r \), the return value of the synthesized expression. Suppose we are given a set \( B \) of building blocks and a list \( CO \) of pairs \((C_{l,m,n}, O_{l,m,n})\) (the collected values and angelic oracles at location \( l \) during the \( m \)-th execution of the \( n \)-th test case). \( C_{l,m,n} \) includes values of different types: boolean, integer or real expressions. A patch is a sequence of building blocks \( \langle b_1, b_2, \ldots, b_k \rangle \) with \( b_i \in B \), whose input values are either taken from \( C_{l,m,n} \) or from other building blocks.

3.4.2 Wiring

The problem is thus to wire the inputs of the building blocks \( \langle b_1, b_2, \ldots, b_k \rangle \) to the input values of the program \( I_0 \) or to other building blocks’ output values, and to make sure that one building block produces the expected output value \( r \) (the angelic value). Compared to previous work [10, 15], we need to make sure that the types of the variables are valid operands (i.e. that an arithmetic operator only manipulates integers, etc.).
Let us assume that \( C_{l,m,n} \) has two values, “False” (boolean) and “3” (integer), and that there are two building blocks: \( BOOL \leftarrow f_1(BOOL) \) and \( BOOL \leftarrow f_2(\text{INT},\text{INT}) \). The synthesis consists of finding a well-formed expression among \( False, f_1(False), f_2(3,3), f_1(f_2(3,3)), \ldots \) which evaluates to the angelic value \( r \).

\(^4\)Adding the division is possible but would require specific care to avoid division by zero.

3.4.3 Mapping Inputs and Outputs with Location Variables

We define \( I = \bigcup I_i \) and \( O = \bigcup \{ r_i \} \) as the sets of input and output values of the building blocks. Let \( I_0 \) be the input \((C_{l,m,n})\) and \( r \) the output of the final patch. We define \( IO \) as \( IO = I \cup O \cup I_0 \cup \{ r \} \). We partition the variables of \( IO \) according to their type into BOOL, INT and REAL. The SMT encoding relies on the creation of location variables \( L = \{ l_x \mid x \in IO \} \) representing the index of each element \( x \in IO \). Value variables \( V = \{ v_x \mid x \in IO \} \) represent the values taken by those elements. Let \( m \) be the number of possible inputs: \( m = |I_0| + |B| \). Location variables are of type integer \((L \subset \text{INT})\). Value variables are of any supported type (boolean, integer or real). Location variables are invariant across all test case executions \( C_{l,m,n} \): they represent the patch structure. Value variables are used internally by the SMT solver to ensure that the semantics of the program is preserved.

3.4.4 Constraints

Let us first define the domain constraints over the location variables.
The location variables of the elements of \( I_0 \) and \( r \) are fixed:

\[ \phi_{\text{FIXED}}(I_0, r) = \bigwedge_{i=1}^{|I_0|} (l_{I_0^i} = i) \land (l_r = m) \]

The location variables of the elements of \( O \) have a domain of \([|I_0| + 1, m]\):

\[ \phi_{\text{OUTPUT}}(B) = \bigwedge_{i=1}^{|B|} |I_0| < l_{r_i} \leq m \]

**Handling typing** Only the locations corresponding to values of the same type are allowed. Suppose that \( \text{type}(x) \) returns the set of elements with the same type as \( x \) among BOOL, INT and REAL. Then we can restrict the values taken by the location variables of the input values of building blocks using the following formula:

\[ \phi_{\text{INPUT}}(I) = \bigwedge_{x \in I} \ \bigvee_{y \in \text{type}(x),\, y \neq x} (l_x = l_y) \]

In our example (one integer program input plus the two constants, so \( |I_0| = 3 \), \( |B| = 2 \) and \( m = 5 \)), we have the following domains for the location variables:

- \( l_{i_0} = 1 \) // input value, integer
- \( l_{c_1} = 2 \) // boolean constant False
- \( l_{c_2} = 3 \) // integer constant 3
- \( l_r = 5 \) // expected output value, boolean
- \( l_{r_1} \in [4,5] \) // output of \( f_1 \), boolean
- \( l_{r_2} \in [4,5] \) // output of \( f_2 \), boolean
- \( l_{a_1} \in \{l_{c_1}, l_{r_1}, l_{r_2}\} \) // parameter of \( f_1 \), boolean
- \( l_{a_2} \in \{l_{i_0}, l_{c_2}\} \) // first parameter of \( f_2 \), integer
- \( l_{a_3} \in \{l_{i_0}, l_{c_2}\} \) // second parameter of \( f_2 \), integer

The following additional constraints are used to control the values of the location variables. First, we need to make sure that there is only one building block output per location (wires are one-to-one):

\[ \phi_{\text{CONS}}(L, O) = \bigwedge_{x,y \in O,\, x \neq y} l_x \neq l_y \]

Second, we need to order the building blocks in such a way that the arguments of a block have already been defined.
\[ \phi_{\text{ACYC}}(B, L, I, O) = \bigwedge_{(\phi_i, I_i, r_i) \in B} \ \bigwedge_{x \in I_i} l_x < l_{r_i} \]

Putting it all together:

\[ \phi_{\text{WFF}}(L, I, O, I_0, r) = \phi_{\text{FIXED}}(I_0, r) \land \phi_{\text{OUTPUT}}(B) \land \phi_{\text{INPUT}}(I) \land \phi_{\text{CONS}}(L, O) \land \phi_{\text{ACYC}}(B, L, I, O) \]

An assignment of the \( L \) variables respecting the predicate \( \phi_{\text{WFF}} \) corresponds to a syntactically correct patch. The value variables corresponding to the input and output of a building block are related according to its functional definition: the predicate \( \phi_i \) is such that \( \phi_i(v_{I_i}, v_{r_i}) \) holds if and only if \( b_i(I_i) = r_i \). Let \( V_{IO} = \{ v_x \mid x \in I \cup O \} \).

\[ \phi_{\text{LIB}}(B, V_{IO}) = \bigwedge_{(\phi_i, I_i, r_i) \in B} \phi_i(v_{I_i}, v_{r_i}) \]

The location and the value variables are connected together using the following rule, which states that elements at the same location should have the same value. Note that because in our case the input or output values can be of different types, we need to limit the application of that rule to values of the same type. That limitation is valid because the domains of the locations are already managed by the constraints \( \phi_{\text{INPUT}}(I) \).

\[ \phi_{\text{CONN}}(L, V_{IO}) = \bigwedge_{T \in \{\text{BOOL}, \text{INT}, \text{REAL}\}} \ \bigwedge_{x, y \in T} (l_x = l_y \rightarrow v_x = v_y) \]

The semantics of the patch for a given input \( I_0 \) and a given output \( r \) is preserved using the following existentially quantified constraint:

\[ \phi_{\text{FUNC}}(L, C_{l,m,n}, O_{l,m,n}) = \exists V_{IO}\; \phi_{\text{CONN}}(L, V_{IO})[v_{I_0} \leftarrow C_{l,m,n},\; v_r \leftarrow O_{l,m,n}] \land \phi_{\text{LIB}}(B, V_{IO}) \]

Here the notation \( \alpha[v_r \leftarrow O_{l,m,n}] \) means that the value of the variable \( v_r \) in \( \alpha \) has been set to \( O_{l,m,n} \).
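Two of the structural constraints are easy to sketch as plain checks on a candidate wiring. In the following simplification (our own, far from a full SMT encoding: locations are plain ints, block \(i\) has input locations `in[i]` and output location `out[i]`), `phiCons` checks that outputs occupy distinct locations and `phiAcyc` checks that every input of a block is defined at a strictly smaller location:

```java
import java.util.HashSet;
import java.util.Set;

// Structural well-formedness checks on a candidate wiring of building blocks.
public class WellFormedness {
    // phi_CONS: no two building block outputs share a location.
    static boolean phiCons(int[] out) {
        Set<Integer> seen = new HashSet<>();
        for (int o : out) {
            if (!seen.add(o)) return false;
        }
        return true;
    }

    // phi_ACYC: each input location strictly precedes the block's output,
    // so arguments are defined before they are used.
    static boolean phiAcyc(int[][] in, int[] out) {
        for (int i = 0; i < out.length; i++) {
            for (int x : in[i]) {
                if (x >= out[i]) return false;
            }
        }
        return true;
    }

    public static void main(String[] args) {
        // Two blocks after three fixed program inputs (locations 1..3):
        // block 0 reads the constant at location 3 twice (output at 4),
        // block 1 reads block 0's output (output at 5).
        int[][] in = {{3, 3}, {4}};
        int[]  out = {4, 5};
        System.out.println(phiCons(out) && phiAcyc(in, out)); // true
        // A cyclic wiring (block 0 reads location 5, defined later) fails.
        System.out.println(phiAcyc(new int[][]{{5}, {4}}, out)); // false
    }
}
```

In the real encoding these checks are SMT constraints over symbolic location variables, so the solver searches over all wirings at once instead of validating one candidate at a time.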
Finally, finding a patch which satisfies all expected input/output pairs \((C_{l,m,n}, O_{l,m,n})\) requires satisfying the following constraint:

\[ \phi_{\text{PATCH}}(CO) = \exists L \left( \bigwedge_{(C_{l,m,n}, O_{l,m,n}) \in CO} \phi_{\text{FUNC}}(L, C_{l,m,n}, O_{l,m,n}) \right) \land \phi_{\text{WFF}}(L, I, O, I_0, r) \]

3.4.5 Levels

Ideally, one would feed SMT with many instances of all kinds of building blocks we are considering (see 3.4.1). Only the required ones would be wired to the final result. This is not efficient in practice: some building blocks require computationally expensive theories (e.g. multiplication). We first try to synthesize an expression with only one instance of the easy building blocks \((<, \neq, =, \leq)\) (\(\geq\) and \(>\) are obtained by symmetry: \( a \geq b \equiv b \leq a \)). Then, we add new building blocks (logic, then arithmetic) and eventually we increase the number of instances of building blocks. This is encoded in arbitrary “levels” as SemFix does [15]. The optimization of those predefined levels is future work.

3.5 Deriving a Patch from an SMT model

If the problem is satisfiable, the SMT solver provides an assignment of all location variables. Here is a possible assignment for our running example: \( l_{i_0} = 1, l_{c_1} = 2, l_{c_2} = 3, l_{r_1} = 4, l_{r_2} = 5, l_{a_1} = 2, l_{a_2} = 1, l_{a_3} = 1 \). The corresponding source patch is obtained with a backward traversal starting at the output location. There often exist building blocks which are wired (SMT produces a location for them) but that are not connected to the final output of the expression. In our example, the output is bound to location 5, which is the output of \( f_2 \); \( f_2 \) takes as both parameters location 1, which is the integer input value \( i_0 \). The final patch is thus the expression \( f_2(i_0, i_0) \), which returns a boolean. It is the repair of the bug, i.e. the fixed IF condition (or the missing precondition).
In this example, \( f_1 \) is never used. 4. EVALUATION Nopol focuses on repairing conditional bugs in Java. In this section, we evaluate our approach with three case studies. First, we repair the running example of [15]; then, we repair a real-world conditional bug from the Apache Commons Math library; finally, we show how to repair a missing-precondition bug on an artificial example. Our prototype implementation of Nopol uses the Spoon library [17] for manipulating Java source code (angelic value mining, instrumentation, final patch synthesis and assessment) and the GZoltar fault localization tool to order repair locations [3]. Nopol generates SMTLIB files using jSMTLIB [5], and we use CVC4 [2] as the SMT solver. Thanks to the generic file format, Nopol can be used with any SMTLIB 2.0 compliant SMT solver. 4.1 Case Study: Tcas Example from SemFix We first take a classical program, Tcas, which was used as an example in previous work (SemFix [15]). Tcas, a traffic collision avoidance system\(^6\), is a program consisting of 135 lines of code, which originates from the Software-artifact Infrastructure Repository (SIR) [6]. Figure 1 shows a code snippet of Tcas. For this code snippet, five test cases are listed in Table 1.
Table 1: Test suite with five test cases for Tcas

<table> <thead> <tr> <th>Test case</th> <th>Input inhibit</th> <th>Input up_sep</th> <th>Input down_sep</th> <th>Expected output</th> <th>Observed output</th> <th>Status</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>true</td> <td>0</td> <td>100</td> <td>0</td> <td>0</td> <td>pass</td> </tr> <tr> <td>2</td> <td>true</td> <td>11</td> <td>110</td> <td>1</td> <td>0</td> <td>fail</td> </tr> <tr> <td>3</td> <td>false</td> <td>100</td> <td>50</td> <td>1</td> <td>1</td> <td>pass</td> </tr> <tr> <td>4</td> <td>true</td> <td>-20</td> <td>60</td> <td>1</td> <td>0</td> <td>fail</td> </tr> <tr> <td>5</td> <td>false</td> <td>0</td> <td>10</td> <td>0</td> <td>0</td> <td>pass</td> </tr> </tbody> </table>

As shown in Figure 1, the faulty line in this code snippet is Line 5, where \( \text{bias} = \text{down\_sep} \) should be fixed to \( \text{bias} = \text{up\_sep} + 100 \). To our own surprise, Nopol generated a radically different patch. For this faulty code, Nopol first employs angelic fix localization to find candidate lines. In this snippet, both Line 3 and Line 7 are given the same likelihood of being the fault location. However, angelic fix localization states that only Line 7 is subject to repair (there exist angelic values for which the failing test cases pass). Nopol then collects testing traces by running the whole test suite. Finally, the testing trace is encoded as an SMT problem and Nopol outputs a patch as the solution to the bug:

```java
 1 int is_upward_preferred(boolean inhibit,
 2                         int up_sep, int down_sep) {
 3   int bias;
 4   if (inhibit) {
 5     bias = down_sep; // fix: bias = up_sep + 100
 6     return 1;
 7   } else {
 8     bias = up_sep;
 9   }
10   if (bias > down_sep) {
11     return 0;
12   } else {
13     return 1;
14   }
15 }
```

Figure 1: The buggy method “is_upward_preferred” of Tcas

The patch expression is read off by a backward traversal: the repaired condition at Line 7 is the output of \( f_4 \), and \( f_4 \) takes as input the output of \( f_2 \).
\( f_2 \) takes as input the output of \( f_1 \), and \( f_1 \) the output of \( f_0 \); the final patch is thus the expression \( f_0(f_1, f_2, f_3, f_4) \). Among these four building blocks, comparison operators in Java are defined from \( f_1 \) to \( f_4 \); e.g., \( f_1 \) and \( f_2 \) denote < and \( \leq \), respectively. 4.2 Case Study: Commons-Math Library In this section, we present how Nopol is able to repair a real bug\(^7\) in Apache Commons Math. Apache Commons Math is a lightweight library for common mathematics and statistics problems.\(^8\) This library consists of 5000 lines of executable code and 352 test cases (each test case is encoded as a JUnit method). Figure 2 shows the buggy source code of its Percentile class. As its name suggests, Percentile returns an estimate of the \( p \)th percentile of the values stored in the array values. \(^6\)Available at http://sir.unl.edu/ \(^7\)See details: https://github.com/apache/commons-math/commit/232771b069dad089226b47a7875d0805f9f8ed927d \(^8\)http://commons.apache.org/proper/commons-math/

```java
 1 public double evaluate(final double[] values, final double p) { ...
 2   int n = values.length; ...
 3   double pos = p * (n + 1) / 100;
 4   double fpos = Math.floor(pos);
 5   int intPos = (int) fpos;
 6   double dif = pos - fpos;
 7   double[] sorted = new double[n];
 8   System.arraycopy(values, 0, sorted, 0, n);
 9   Arrays.sort(sorted);
10   if (pos < 1)
11     return sorted[0];
12   if (pos > n) // fix: if (pos >= n)
13     return sorted[n - 1];
14   double lower = sorted[intPos - 1];
15   double upper = sorted[intPos];
16   return lower + dif * (upper - lower);
17 }
```

Figure 2: Code snippet of Percentile in Commons Math

According to the documentation, the algorithm of Percentile is implemented as follows. Let \( n \) be the length of the (sorted) array. Compute the estimated percentile position \( \text{pos} = \frac{p \times (n+1)}{100} \) and the difference \( \text{dif} \) between \( \text{pos} \) and \( \lfloor \text{pos} \rfloor \).
If \( \text{pos} < 1 \), return the smallest element of the array; if \( \text{pos} \geq n \), return the largest element; otherwise return the final calculation of the percentile. Thus, Line 12 in Figure 2 contains a bug, which should be corrected as if \( (\text{pos} \geq n) \). Table 2 shows the one failing test case for this bug and one of the 351 passing test cases. For the failing test case, an \textit{ArrayIndexOutOfBoundsException} is thrown at Line 15.

Table 2: Test suite with one failing test case for Percentile

<table> <thead> <tr> <th>Test case</th> <th>Input values</th> <th>Input p</th> <th>Expected output</th> <th>Observed output</th> <th>Status</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>{0, 1}</td> <td>25</td> <td>0.0</td> <td>0.0</td> <td>pass</td> </tr> <tr> <td>2</td> <td>{1, 2, 3}</td> <td>75</td> <td>3.0</td> <td>Exception</td> <td>fail</td> </tr> </tbody> </table>

The building blocks for this program are as follows (the same as the building blocks in Section 4.1, except that INT is replaced with REAL):
\[
\begin{align*}
(f_1):&\ f_{1,1} < f_{1,2}, & I_{f_1} &= (f_{1,1}{:}\,\text{REAL},\ f_{1,2}{:}\,\text{REAL}), & r_{f_1}&{:}\,\text{BOOL} \\
(f_2):&\ f_{2,1} \leq f_{2,2}, & I_{f_2} &= (f_{2,1}{:}\,\text{REAL},\ f_{2,2}{:}\,\text{REAL}), & r_{f_2}&{:}\,\text{BOOL} \\
(f_3):&\ f_{3,1} = f_{3,2}, & I_{f_3} &= (f_{3,1}{:}\,\text{REAL},\ f_{3,2}{:}\,\text{REAL}), & r_{f_3}&{:}\,\text{BOOL} \\
(f_4):&\ f_{4,1} \neq f_{4,2}, & I_{f_4} &= (f_{4,1}{:}\,\text{REAL},\ f_{4,2}{:}\,\text{REAL}), & r_{f_4}&{:}\,\text{BOOL}
\end{align*}
\]
Based on the above building blocks, the solution to the SMT problem is \( f_2(n, \text{pos}) \), that is, \( \text{pos} \geq n \) in Java. Thus, NOPOL can generate the expected patch, if \( (\text{pos} \geq n) \). NOPOL’s angelic fix localization was able to spot a fixable IF condition after analyzing 11 candidate IF conditions. In contrast with the example of Section 4.1, this is a real-world use case. Based on a test suite of 352 test cases with only one failing test case, NOPOL finds a patch that matches the one made by the developers.
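To make the documented algorithm concrete, here is a runnable sketch of the evaluate method with the repaired condition applied. It is a self-contained simplification of Figure 2, not the actual Commons Math source; with the original condition pos > n, the first call below would throw the ArrayIndexOutOfBoundsException described above.

```java
import java.util.Arrays;

public class PercentileSketch {
    // Documented Percentile algorithm with the repaired condition (pos >= n).
    static double evaluate(double[] values, double p) {
        int n = values.length;
        double pos = p * (n + 1) / 100;
        double fpos = Math.floor(pos);
        int intPos = (int) fpos;
        double dif = pos - fpos;
        double[] sorted = values.clone();
        Arrays.sort(sorted);
        if (pos < 1) return sorted[0];          // below the first position
        if (pos >= n) return sorted[n - 1];     // the repaired condition
        double lower = sorted[intPos - 1];      // linear interpolation otherwise
        double upper = sorted[intPos];
        return lower + dif * (upper - lower);
    }

    public static void main(String[] args) {
        // The failing test case of Table 2 now passes: pos = 3.0 >= n = 3.
        System.out.println(evaluate(new double[]{1, 2, 3}, 75)); // 3.0
        // The passing test case is unaffected: pos = 0.75 < 1.
        System.out.println(evaluate(new double[]{0, 1}, 25));    // 0.0
    }
}
```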
4.3 Case Study: Missing Precondition Example In this section, we take an artificial example to show how to repair a missing-precondition bug with NOPOL. Figure 3 presents a method for extracting the folder of a given path. Its related test suite consists of two test cases: one for an absolute file path, in which case the method should return the enclosing folder; and one for a local file, in which case an empty string is expected. The bug of this method is a missing precondition at Line 5. In the absence of this precondition, a \textit{StringIndexOutOfBoundsException} is thrown at Line 6.

```java
// In Linux, separator is "/"
1 private final String separator = File.separator;
2 public String extractFolder(String path) {
3   String result = "";
4   int index = path.lastIndexOf(separator);
5   // fix: if (index > 0)
6   result = path.substring(0, index);
7   return result;
8 }
```

Figure 3: Code snippet of an example of a missing-precondition bug

Table 3: Test suite with two test cases for a missing-precondition bug

<table> <thead> <tr> <th>Test case</th> <th>Input path</th> <th>Expected output</th> <th>Observed output</th> <th>Status</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>/home/user/Path.java</td> <td>/home/user</td> <td>/home/user</td> <td>pass</td> </tr> <tr> <td>2</td> <td>File.java</td> <td>(empty string)</td> <td>Exception</td> <td>fail</td> </tr> </tbody> </table>

With NOPOL, the bug at Line 5 can be repaired. The building blocks for Line 5 are the same as those in Section 4.1. The solution to the SMT problem based on these building blocks is \( f_1(\text{result.length()}, \text{index}) \). Thus, the result of NOPOL is:

if (result.length() < index)

Note that this repair is slightly different from the fix suggested at Line 5 in Figure 3. Since the length of result is 0 at that point, the repair by NOPOL is equivalent to it. For this bug, NOPOL finds no candidate buggy IF conditions with angelic fix localization.
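The effect of the synthesized precondition can be checked with a small sketch of the repaired method. The separator is passed explicitly here for platform independence; this is an illustrative rewrite, not the figure's exact code.

```java
public class ExtractFolderSketch {
    // Figure 3's method with NOPOL's synthesized precondition applied
    // ('result.length() < index', equivalent to 'index > 0' since
    // result is still empty at that point).
    static String extractFolder(String path, String separator) {
        String result = "";
        int index = path.lastIndexOf(separator);
        if (result.length() < index)          // the added precondition
            result = path.substring(0, index);
        return result;
    }

    public static void main(String[] args) {
        // Test case 1: absolute path, returns the enclosing folder.
        System.out.println(extractFolder("/home/user/Path.java", "/")); // /home/user
        // Test case 2: local file, index is -1; without the precondition
        // substring(0, -1) would throw, now it returns the empty string.
        System.out.println(extractFolder("File.java", "/").isEmpty());  // true
    }
}
```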
When it studies whether the bug might be a missing precondition, it tries to skip statements of the method. After adding an angelic precondition at Line 6, the failing test passes. Then, the runtime data is encoded as an SMT problem and the SMT solution is converted into the final patch. 5. LIMITATIONS In this section, we list several limitations of our work. **Single point of failure** As most of the previous work in test-suite based program repair, our approach, NOPOL, can only deal with faulty programs that contain a single fault. In the current version, we do not stack patches (the search space would be exponentially larger). **Granularity of building blocks** The ability to fix a bug heavily depends on the building blocks expressible in SMT. If \( B \) is too small, there is little chance to fix the bug, unless it is trivial. If \( B \) is too large, the generated patch may be too large and complex to be accepted by a developer, since we have no guarantee of minimality. Improving the way we set the building blocks of \( B \) is on our research agenda. **Non-fixable bugs** NOPOL cannot generate patches for all buggy \( if \) conditions and missing preconditions. The ability to generate a patch is limited by the test suite. In our work, for a buggy \( if \) condition, we use angelic fix localization to flip the boolean value of the condition for the failing test cases. However, if no passing test case covers the flipped boolean value, a trivial repair would be to replace the conditional expression by a boolean constant. This means that, to generate a non-trivial patch for a buggy \( if \) condition, both boolean values of the condition should be covered by at least one passing and one failing test case. **Conditions including methods with parameters** We currently support the synthesis of conditions containing method calls without parameters. Our approach cannot generate a patch if a method with parameters has to appear in a condition.
For example, no patch can be generated for \( if(\text{list.contains}(\text{object})) \) because of its parameter \( \text{object} \). The reason is that, since we need to gather the possible values of the method call, we would need to evaluate that method for each possible parameter when running the tests. While this is feasible for methods with one parameter, methods with more parameters would induce a combinatorial explosion during the data collection phase. We plan to support methods with few parameters in the future by leveraging semantic analysis. **Limitations of SMT solvers** In the current state of our research, the performance and intrinsic limitations of SMT solvers are not an issue. Given our set of constraints (as given by our encoding of the repair problem), the solver we use (CVC4) is sufficient for our needs (it takes a few seconds to fix the Apache Commons Math bug). However, our evaluation is preliminary. In particular, we anticipate one major problem: CVC4 does not currently have full support for non-linear arithmetic. This is problematic for synthesizing conditional expressions that use multiplication or division. We consider using another SMT solver (e.g., Z3) in order to overcome this issue. 6. RELATED WORK Test-suite based program repair generates patches and validates them against a given test suite. Le Goues et al. [7] propose GenProg, a test-suite based program repair approach using genetic programming. In GenProg, a program is represented as an Abstract Syntax Tree (AST), and a patch is a newly generated AST obtained by weighting statements in the program. Based on genetic programming, candidate patches are generated via multiple trials; then, for each candidate patch, the given test suite is run to identify a patch with which all test cases pass. The role of genetic programming is to obtain new ASTs by copying and replacing nodes in the original AST. An evaluation on 16 C programs shows that GenProg can generate patches with an average success rate of 77 percent.
Nguyen et al. [15] propose SemFix, a program repair approach via semantic analysis. In contrast to the genetic programming of GenProg, SemFix generates patches by combining symbolic execution, constraint solving, and program synthesis. As mentioned in Section 2.2, SemFix generates constraints by formulating passing test cases and then solves those constraints by traversing a search space of repair expressions. Compared with GenProg, SemFix reports a higher success rate on C programs at a lower time cost. In this paper, we also focus on program repair by leveraging constraint solving and program synthesis. The key difference compared to SemFix is that Nopol is able to repair missing preconditions, a kind of fault that is not handled by SemFix. Kim et al. [11] propose Par, a repair approach using fix patterns that represent common ways of fixing common bugs. These fix patterns can avoid the nonsensical patches caused by the randomness of some mutation operators. Based on the fix patterns, 119 bugs are examined for patch generation. In this work, the evaluation of patches is carried out by 253 human subjects, including 89 students and 164 developers. Martinez and Monperrus [13] mine historical repair actions to reason about future repair actions with a probabilistic model. At a fine granularity of abstract syntax trees, this work analyzes over 62,000 versioning transactions in 14 repositories of open-source Java projects to collect probabilistic distributions of repair actions. Such distributions can be used as prior knowledge to guide program repair. Program synthesis is a topic related to program repair. Program synthesis aims to form a new program by combining existing program components. Jha et al. [10] mine program oracles based on examples and employ SMT solvers to solve synthesis constraints. In this work, manual or formal specifications are replaced by input-output oracles. They evaluate this work on 25 benchmark examples in program deobfuscation.
Their follow-up work [8] addresses the same problem by encoding the synthesis constraint as a first-order logic formula; the maximum size of the constraint is quadratic in the number of given components. In our work, fault localization is used as a step to provide faulty statements. The goal of fault localization is to rank suspicious statements (or blocks, or classes) in order to find the location of bugs. A general framework for fault localization is to collect a program spectrum (a matrix of testing results based on a given test suite) and to sort the statements in the spectrum with specific metrics (e.g., Tarantula [6] and Ochiai [1]). Among existing metrics in fault localization, Ochiai [1] has been evaluated as one of the most effective. In Ochiai, each statement is assigned a suspiciousness value, which is the Ochiai index computed from the number of failing test cases and the number of test cases covering the statement. 7. CONCLUSION In this paper, we propose Nopol, a test-suite based repair approach using SMT. We target two kinds of bugs: buggy \( if \) conditions and missing preconditions. Given a faulty program and its test suite, Nopol employs a specific fault localization technique, angelic fix localization, to find suspicious statements. For each candidate statement, Nopol collects test execution traces at this point of the program. Those traces are then encoded as an SMT problem, and the solution to this SMT problem is converted into a patch for the faulty program. Preliminary results on a real-world bug in the Apache Commons Math library and two artificial examples show that our approach can fix the bugs of our fault model: buggy \( if \) conditions and missing preconditions. In future work, we plan to evaluate our approach on more real-world bugs. We also wish to extend Nopol to fix bugs in the conditions of loop structures (while, for, etc.). Acknowledgments This work is partially supported by the INRIA Internships program and the CNRS delegation program.
We would like to thank David Cok for giving us full access to jSMTLIB. 8. REFERENCES
October 2004

**In this issue**

- 3 SSL in WebSphere MQ 5.3
- 7 WebSphere Translation Server
- 10 MQ V5.3 for z/OS page set removal procedure
- 19 Setting up a WebSphere MQ Integrator Broker in a parallel Sysplex
- 31 Queue back-up and restore tool for Unix
- 43 Java Message Service, WebSphere Application Server, and Message Driven Beans
- 47 MQ news

© Xephon Inc 2004

Published by Xephon Inc, PO Box 550547, Dallas, Texas 75355, USA. Phone: 214-340-5690. Fax: 214-341-7081.

Editor: Trevor Eddolls. E-mail: trevore@xephon.com

Publisher: Nicole Thomas. E-mail: nicole@xephon.com

**Disclaimer**

Readers are cautioned that, although the information in this journal is presented in good faith, neither Xephon nor the organizations or individuals that supplied information in this journal give any warranty or make any representations as to the accuracy of the material it contains. Neither Xephon nor the contributing organizations or individuals accept any liability of any kind howsoever arising out of the use of such material. Readers should satisfy themselves as to the correctness and relevance to their circumstances of all advice, information, code, JCL, scripts, and other contents of this journal before making any use of it.

**Subscriptions and back-issues**

A year's subscription to MQ Update, comprising twelve monthly issues, costs $380.00 in the USA and Canada; £255.00 in the UK; £261.00 in Europe; £267.00 in Australasia and Japan; and £265.50 elsewhere. In all cases the price includes postage. Individual issues, starting with the July 2000 issue, are available separately to subscribers for $33.75 (£22.50) each including postage.

**Contributions**

When Xephon is given copyright, articles published in MQ Update are paid for at the rate of $160 (£100 outside North America) per 1000 words and $80 (£50) per 100 lines of code for the first 200 lines of original material. The remaining code is paid for at the rate of $32 (£20) per 100 lines.
To find out more about contributing an article, without any obligation, please download a copy of our Notes for Contributors from www.xephon.com/nfc.

**MQ Update on-line**

Code from MQ Update, and complete issues in Acrobat PDF format, can be downloaded from our Web site at www.xephon.com/mq; you will need to supply a word from the printed issue.

© Xephon Inc 2004. All rights reserved. None of the text in this publication may be reproduced, stored in a retrieval system, or transmitted in any form or by any means, without the prior permission of the copyright owner. Subscribers are free to copy any code reproduced in this publication for use in their own installations, but may not sell such code or incorporate it in any commercial product. No part of this publication may be used for any form of advertising, sales promotion, or publicity without the written permission of the publisher. Printed in England.

SSL in WebSphere MQ 5.3

WebSphere MQ security is now enhanced with the introduction of support for Secure Sockets Layer (SSL), the Internet standard for secure communication. This article describes what SSL in WebSphere MQ 5.3 for z/OS offers and how to set it up.

WHAT IS SSL?

Secure Sockets Layer (SSL) technology is the standard Internet security protocol, designed to secure data transmission over an insecure network. SSL makes use of digital certificates to establish the identity of the two parties that want to establish an SSL connection. Through this process, typically referred to as the SSL handshake, a secure, confidential communications 'pipe' is created between these two entities. SSL basically addresses the following security issues:

- Impersonation – the SSL handshake allows the two parties involved to be sure of each other's identity (identification and authentication service).
- Eavesdropping – data transmitted is encrypted to ensure that someone in between doesn't get access to the information sent (confidentiality service).
- Tampering with information – hash functions are used to detect whether someone has intercepted and changed the information (data integrity service).

Of course, the extent of the protection offered depends on whether the symmetric (secret) key or asymmetric (public and private key) approach is used. The length of the key also influences this, because it determines how quickly the key can be broken using a brute-force approach. The standard key sizes (512, 768, and 1024-bit keys) provide low, medium, and high protection respectively (the larger the key size, the longer it takes to break!).

WEBSPHERE MQ AND SSL

SSL can be used to provide link-level security with both MCA channels (for queue manager to queue manager communication) and MQI channels (for client applications connecting to a queue manager). A digital certificate has to be obtained for each queue manager and each client user ID that wishes to communicate over an SSL-secured channel. The digital certificates are maintained in a key repository. The CipherSpec, which includes the encryption/decryption algorithm to be used by the SSL protocol, is specified in the channel definition. Whenever the channel is started, the certificate given to the queue manager is used to prove its identity. After this handshake (during channel start), the message exchanges are encrypted using the algorithm specified in the CipherSpec defined for the channel.

SETTING UP WEBSPHERE MQ SSL ON Z/OS

**Setting up SSL tasks**

On z/OS, the number of server subtasks used for processing SSL calls is set using the SSLTASKS parameter of the ALTER QMGR command. At least two server subtasks are required to use SSL channels. Although the range of permitted values is 0 to 9999, IBM recommends that SSLTASKS not exceed 50 – larger values can be expected to cause storage allocation problems.

**Associating a key repository with a queue manager**

Having a key repository at each end of the connection is a prerequisite for SSL.
The key repository mainly contains:

1. CA certificates from various certification authorities, which allow the queue manager to verify the certificates from its partners (from the other end of the connection) to establish their identity.
2. The personal certificate received from a certification authority. Each queue manager and WebSphere MQ client is associated with a single certificate, with which it establishes its identity to the other partner.

On z/OS, digital certificates are stored in a key ring that is managed by RACF (or another external security manager). Each queue manager must have access to a key repository. The steps to be followed in establishing this access are as follows:

1. Create a new key ring for the queue manager using the following command (userid is the user ID of the channel initiator address space):

   RACDCERT ID(userid) ADDRING(ring-name)

2. Connect the relevant CA certificates to it using the command:

   RACDCERT ID(userid) CONNECT(CERTAUTH LABEL('CA 1') RING(ring-name) USAGE(CERTAUTH))

3. Use the SSLKEYR parameter on the ALTER QMGR command to associate the key repository with the queue manager:

   ALTER QMGR SSLKEYR(ring-name)

You then need to add the personal certificate obtained from the CA to the key ring. The steps involved are:

1. Add the certificate to the RACF database, specifying a label that associates the digital certificate with the queue manager. On z/OS, WebSphere MQ uses the ibmWebSphereMQ prefix followed by the name of the queue manager for the label name:

   RACDCERT ID(userid) ADD(input-data-set-name) WITHLABEL('label-name')

2. Connect the personal certificate to the key ring created for the queue manager using:

   RACDCERT ID(userid) CONNECT(ID(userid) LABEL('label-name') RING(ring-name) USAGE(PERSONAL))

To learn more about managing certificates – adding new certificates, deleting certificates, or transporting them from another key ring – refer to the RACDCERT command.
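Putting the steps above together, here is a minimal illustrative sketch for a hypothetical queue manager QMP1 whose channel initiator runs under user ID MQCHIN; the key ring name QMP1RING, the certificate dataset, and the SSLTASKS value are all example names, not values from this article:

```
RACDCERT ID(MQCHIN) ADDRING(QMP1RING)
RACDCERT ID(MQCHIN) CONNECT(CERTAUTH LABEL('CA 1') RING(QMP1RING) USAGE(CERTAUTH))
RACDCERT ID(MQCHIN) ADD('MQCHIN.CERT.DATASET') WITHLABEL('ibmWebSphereMQQMP1')
RACDCERT ID(MQCHIN) CONNECT(ID(MQCHIN) LABEL('ibmWebSphereMQQMP1') RING(QMP1RING) USAGE(PERSONAL))

ALTER QMGR SSLKEYR(QMP1RING) SSLTASKS(8)
```

Remember that these changes take effect only when the channel initiator is started or restarted.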
In this context, the only important point to note is that changes to the certificates in the key ring and to the key repository attribute become effective only when the channel initiator is started or restarted.

**Defining channels to use SSL**

To use SSL, your channel must be defined accordingly using the three SSL parameters – SSLCIPH, SSLPEER, and SSLCAUTH – on the DEFINE CHANNEL command. Only the SSLCIPH parameter is mandatory if you want your channel to use SSL. The SSLCIPH parameter specifies the CipherSpec used on the channel. The CipherSpec determines the hash algorithm (MD5/SHA), the encryption algorithm (AES/DES/3DES/RC2/RC4/none), and the number of encryption bits. Obviously, the CipherSpec used must be the same on both ends of the channel, so that each partner can decrypt the data encrypted by the other. For the list of all possible values of CipherSpec, refer to the *WebSphere MQ Script (MQSC) Command Reference* manual.

According to the SSL protocol, it is mandatory for the SSL client (the initiating end of the channel) to obtain and validate the certificate of the SSL server during the SSL handshake, whereas SSL client authentication is optional. In WebSphere MQ, SSL client authentication becomes mandatory under the following conditions:

• Specifying SSLCAUTH as 'REQUIRED'.
• Specifying the SSLPEER parameter, which defines the filter used to compare against the distinguished name of the certificate sent by the SSL client (peer). If the distinguished name received from the peer doesn't match the one specified, the authentication fails and the channel does not start.

Note that the SSL server always validates the client certificate if one is sent (even if SSLCAUTH is set to OPTIONAL).

CONCLUSION

The SSL support provided by WebSphere MQ offers authentication, message integrity checking, and data encryption for messages when they travel across the Internet.
It is important to understand that the link-level security offered by WebSphere MQ protects messages while they are being transferred from one queue manager to another (especially useful when messages are transmitted over an insecure network). It does not include protection of messages while they are stored in queues, which should be sufficient when the queue managers are running in a controlled and trusted environment.

Sasirekha Cota
Tata Consultancy Services (India)
© Xephon 2004

WebSphere Translation Server

With more and more businesses being international in nature, there will come a time when somebody in one of those companies is presented with a document in a language they do not understand. You can't sell to someone if they don't know what you're offering, and you daren't buy from someone if you don't understand the contract! Step forward IBM's WebSphere Translation Server for Multiplatforms, which is now at Version 5. This product provides machine translation, and is specifically geared towards companies that want to provide pages on their Web sites in the reader's native language. And, of course, it is designed to do its job at a cost to the company that uses it that is much less than a room full of human translators. The product is also designed to perform its translation in real time.

The IBM WebSphere Translation Server for Multiplatforms 5.0 is a machine translation (MT) offering that can help companies remove language as a barrier to global communication and e-commerce. WebSphere Translation Server (WTS) enables enterprises to provide content in multiple languages in real time. Specifically designed for enterprise use, the WebSphere Translation Server allows companies to leverage their existing Web infrastructure to provide content to users in their native language, at a fraction of the cost of professional translation. WTS is based on IBM machine translation technology.
It can run on a dedicated server, using Java Remote Method Invocation (RMI) and the Java protocol to communicate with the WebSphere Application Server. In addition, Web page HTML content can be translation-enabled to support HTTP servers from Apache, IBM, Microsoft, or Netscape – hence the 'Multiplatform' part of its name. WTS consists of:

- Machine translation engines for translating text from one language to another (eg French to English).
- User Dictionary Manager tools, which allow specific words to be added to a domain. This means that slang or technical terms can be added as a way of tuning for a specific application.
- Support for WebSphere Application Server (WAS) and HTTP servers from Apache, IBM, Microsoft, and Netscape.

So, before you rush out and buy it, you probably want to know what languages the product can translate. The current list is:

- English-to-French/French-to-English
- English-to-German/German-to-English
- English-to-Italian/Italian-to-English
- English-to-Spanish/Spanish-to-English
- English-to-Chinese (traditional)/Chinese (traditional)-to-English
- English-to-Chinese (simplified)/Chinese (simplified)-to-English
- English-to-Japanese/Japanese-to-English
- English-to-Korean
- English-to-Brazilian Portuguese.

Could well be worth a look for companies that are trading (or trying to trade) in areas where those languages are spoken.

Nick Nourse
Independent Consultant (UK)
© Xephon 2004

**E-mail alerts**

Our e-mail alert service will notify you when new issues of MQ Update have been placed on our Web site. If you'd like to sign up, go to http://www.xephon.com/mq and click the 'Receive an e-mail alert' link.

MQ V5.3 for z/OS page set removal procedure

BACKGROUND

The administration tasks to manage page sets are described in the IBM-supplied *System Administration Guide*, specifically Chapter 10, 'Managing page sets'. At one particular customer's site, some page sets were hardly used and had to be removed.
The addition of page sets is a well-known procedure, but their removal was not described in any of the manuals. This article aims to describe this procedure, in the hope that IBM will add it to their manuals.

PROCEDURE

This is not for the faint-hearted, because it involves removing recovery information from a system that is working fine, and effectively 'cold' starting it! The reasons for wanting and needing to remove page sets were:

- Valuable disk space was being used. Even an empty page set uses up space because it is pre-formatted.
- Part of the back-up and recovery procedure demanded that all page sets be backed up, which in the case of nearly 'empty' page sets means more wasted space and CPU cycles.

The important part is to ensure that all queue definitions are moved from the page set(s) to be removed to another page set. As a reminder, this is the mapping used by MQ for z/OS:

• Queue definition to storage class
• Storage class to page set
• Page set to buffer pool.

It is assumed that the reader is already familiar with the way that queue definitions can be moved, but here is a quick summary:

• Find an appropriate storage class or define a new one.
• If messages reside on queues, back them up to a dataset or move them to 'back-up' queues. Ensure the back-up queues are not on the page sets to be emptied!
• Alter the queue definition to use the new storage class.
• Move any messages back onto the queue.
• Finally, alter the original storage class to point to a different page set. Note that in some cases this may produce an error message like 'STGCLASS(XXXX) IS CURRENTLY IN USE'.

Eventually, a stage will be reached where the page sets selected for removal are devoid of queues. Double-check this by issuing these commands:

• DIS QL(*) PSID(n) – where n is the page set id, eg 18.
• DIS USAGE(*)

An important point to understand is that MQ maintains recovery information within the page sets, and even an empty page set needs a recovery point.
This is true even if the page sets are 'commented' out of the start-up. The author prefers not to comment them out, as they produce 'OFFLINE' warning messages and MQ is still 'aware' of them. One way to see how MQ maintains recovery information is to look at the joblog of the MQ master address space. At start-up, shutdown, and during periods of update activity, system checkpoints are taken. These checkpoints record the state of the system and include the so-called 'recovery RBA' of the page sets. The recovery RBA is stored in several places:

- Checkpoints on the log.
- On every physical (4K) page that has been changed.

One 'special' place is the first page (page 0) of each page set, which holds the lowest recovery RBA of all its pages; this is used at system recovery time to see whether recovery is required on that page set. To remove the recovery information for the page set being removed from the log, it is necessary to restart the queue manager with new logs. This can be safely done only if the queue manager has been shut down cleanly, so that its entire state is recorded consistently on the page sets. In order to ensure that all recoverable resources were safely on the archived logs and no activity was missed, the following two-stage process was used.

**SHUTDOWN PHASE 1**

*Step 01 – stop the queue managers.*

Use the `STOP QMGR MODE(FORCE)` command and check that it worked by ensuring there were no 'in-doubt' threads. What we're trying to achieve is a 100% clean shutdown of MQ with all recoverable resources on the archived logs. At the client's site, however, this was not possible because a number of applications (internal and external) had not been coded with the 'FAIL-IF-QUIESCING' option, which would tell the application that the system is shutting down. So in order to force applications to detach from the queue manager, we had no option but to use MODE(FORCE).
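Step 01 can be sketched as the following pair of MQSC commands; the DISPLAY THREAD command used here to check for in-doubt threads is the same one that appears in the restart joblog later in this article:

```
STOP QMGR MODE(FORCE)
* Verify that no in-doubt threads remain before proceeding:
DISPLAY THREAD(*) TYPE(INDOUBT)
```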
*Step 02 – stop all external MQ activity.*

This includes batch, CICS, IMS, and any automation products.

**SHUTDOWN PHASE 2**

*Step 03 – restart the queue managers.* As soon as they're up, stop the channel initiators. What we're really doing here is preventing any clients getting on to the queue manager (if the CHIN won't come down, CANCEL it). An alternative method to ensure no-one uses the system is to alter the connection rules in the external security manager.

*Step 04 – copy the active logs onto the archived logs.* Use the ARCHIVE LOG command.

*Step 05 – stop the queue manager(s) with MODE(QUIESCE) (if possible).*

*Step 06 – double-check that archived logs were created, as well as a back-up of the BSDS (see the archived 'B' datasets).*

*Step 07 – back up the current set of page sets – all of them!* During normal running, the normal back-up procedures ran while the queue managers were active (so-called 'fuzzy' back-ups) as a multi-step job, causing the page sets to be backed up in single-stream mode. At this point, however, in order to save time, each page set was backed up by its own job, and these jobs ran in parallel. Note: steps 8–11 can be done while step 7 is running.

*Step 08 – back up all the active logs and BSDS, plus their DUAL copies (eg via DFDSS).*

*Step 09 – back up the contents of any shared queues* using the command BACKUP CFSTRUCT(x) for each cfstruct(x) in the queue sharing group. Performing the back-up on another queue manager in the queue sharing group, after the subject queue manager has been stopped, ensures that recovery from the back-up will not require any log data from the subject queue manager.

*Step 10 – delete and redefine the active log datasets and dual copies.* This is required because MQ will have written certain 'page set control' records, telling the system what RBA ranges are required for media recovery. This information is not wanted because we want to remove the page sets. Note: the archived log datasets (on cartridge) are left 'as is'.
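The MQSC commands behind steps 04 and 09 can be sketched as follows; the CF structure name APPL1 is a hypothetical example, and in practice a BACKUP CFSTRUCT command is issued for each application structure in the queue sharing group:

```
* Step 04 – force the current active log onto the archive:
ARCHIVE LOG
* Step 09 – back up the contents of a shared-queue CF structure:
BACKUP CFSTRUCT(APPL1)
```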
**Step 11 – delete and redefine the BSDS and its dual copy.** This is required because it is an inventory of checkpoints and logs needed for recovery, and recovery is not what is wanted – provided, of course, the queue manager came down cleanly!

**Step 12 – run the BSDS log change utility, CSQJU003.** This is required to store the names of the (new) active logs in the BSDS. It is, in fact, exactly the same job that was run when the queue manager was first set up.

**Step 13 – remove the recovery information from each of the page sets (including the ones being removed).** Use the CSQUTIL command **RESETPAGE FORCE**. This can take a relatively long time, as the utility has to alter each 4K page because each page has an associated recovery RBA. Note: steps 14–15 can be done while step 13 is running.

**Step 14 – alter the MSTR start-up JCL by removing references to the page sets to be removed.**

**Step 15 – alter the CSQINP1 members by removing references to the page-set-to-bufferpool mapping for the relevant page sets.**

**Step 16 – restart the queue manager.** So what will happen at restart?
Here are some excerpts from the MQ MSTR joblog that should be present:

```
08.49.50 STC04353 CSQJ127I ?QMP1 SYSTEM TIME STAMP FOR BSDS=********** **********,**
08.49.52 STC04353 CSQJ001I ?QMP1 CURRENT COPY 1 ACTIVE LOG DATA SET IS
                  DSNAME=QMP1.LOGCOPY1.DS01, STARTRBA=00000000000 ENDRBA=00002BF1FFFF
08.49.52 STC04353 CSQJ001I ?QMP1 CURRENT COPY 2 ACTIVE LOG DATA SET IS
                  DSNAME=QMP1.LOGCOPY2.DS01, STARTRBA=0000000000000000 ENDRBA=0000000000000000
08.49.52 STC04353 CSQI099I ?QMP1 LOG RECORDING TO COMMENCE WITH
                  STARTRBA=0000000000000000
08.49.53 STC04353 CSQR001I ?QMP1 RESTART INITIATED
08.49.53 STC04353 CSQR003I ?QMP1 RESTART - PRIOR CHECKPOINT RBA=0000000000000000
08.49.53 STC04353 CSQR004I ?QMP1 RESTART - UR COUNTS -
                  IN COMMIT=0, INDOUBT=0, INFLIGHT=0, IN BACKOUT=0
08.49.53 STC04353 CSQI049I ?QMP1 Page set 0 has media recovery
                  RBA=0000000000000000, checkpoint RBA=FFFFFF000000
08.49.53 STC04353 CSQI049I ?QMP1 Page set 1 has media recovery
                  RBA=0000000000000000, checkpoint RBA=FFFFFF000000
```

etc – it is the same for the rest of the page sets.
```
08.49.54 STC04353 CSQR030I ?QMP1 Forward recovery log range
                  from RBA=0000000000000000 to RBA=0000000000000000
08.49.54 STC04353 CSQR005I ?QMP1 RESTART - FORWARD RECOVERY COMPLETE -
                  IN COMMIT=0, INDOUBT=0
08.49.54 STC04353 CSQR032I ?QMP1 Backward recovery log range
                  from RBA=0000000000000000 to RBA=0000000000000000
08.49.54 STC04353 CSQR006I ?QMP1 RESTART - BACKWARD RECOVERY COMPLETE -
                  INFLIGHT=0, IN BACKOUT=0
08.49.58 STC04353 CSQR002I ?QMP1 RESTART COMPLETED
08.49.58 STC04353 CSQP018I ?QMP1 CSQPBCWK CHECKPOINT STARTED FOR ALL BUFFER POOLS
08.49.58 STC04353 CSQP021I ?QMP1 DISPLAY THREAD(*) TYPE(INDOUBT)
08.49.58 STC04353 CSQP021I ?QMP1 Page set 0 new media recovery
                  RBA=0000000000000000, checkpoint RBA=0000000000000000
08.49.58 STC04353 CSQP019I ?QMP1 CSQP1DWP CHECKPOINT COMPLETED FOR
                  BUFFER POOL 2, 28 PAGES WRITTEN
08.49.58 STC04353 CSQP021I ?QMP1 Page set 1 new media recovery
                  RBA=0000000000000000, checkpoint RBA=0000000000000000
08.49.58 STC04353 CSQP019I ?QMP1 CSQP1DWP CHECKPOINT COMPLETED FOR
                  BUFFER POOL 3, 47 PAGES WRITTEN
08.49.58 STC04353 CSQP021I ?QMP1 Page set 2 new media recovery
                  RBA=0000000000000000, checkpoint RBA=0000000000000000
```

etc, followed by similar output for the rest of the page sets.

**Step 17 – issue another BACKUP CFSTRUCT to establish a new point of recovery for the messages in the CF structures.** The output is similar for the other application structures.

**Step 18 – testing.** Some suggestions follow. Check that:

- Existing messages can still be accessed (browse).
- New messages can be added and deleted (both via batch and on-line).

**Step 19 – update the regular page set back-up jobs and delete the removed page sets and their back-ups.** Do this only when the system has been up and running for a day or so.
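As part of the verification, the page-set checks described earlier can be repeated after restart to confirm that the removed page sets are gone and the remaining ones hold the expected queues; PSID 18 is just the example page set ID used earlier:

```
* Confirm which queues reside on each remaining page set:
DIS QL(*) PSID(18)
DIS USAGE(*)
```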
BACKOUT SCENARIO

If for some reason the change had to be backed out, use the following procedure to keep the page sets that remained allocated and add the 'old' page sets back in (even though they are empty!).

**Step B1 – shut down the queue manager using a standard STOP FORCE.**

**Step B2 – reintroduce the page sets' DD statements to the MSTR start-up JCL.**

**Step B3 – reintroduce the DEF PSID commands to CSQINP1.**

**Step B4 – start the queue manager.**

There is no need to go through all the previous steps, because we want to retain the status of the page sets that remained allocated and allow MQ to resolve any recoveries.

RECOMMENDATIONS

I recommend that you:

• Plan this operation carefully, including its back-out.
• Work out the longest part of the operation, namely the back-up and 'resetpage' of the largest page set(s).
• Test it out in a pre-development environment.
• In a queue-sharing environment, perform the change on one queue manager at a time. In this way, persistent shared messages remain recoverable via the logs of other queue managers in the queue sharing group.
• Keep one page set available for emergencies.
• Do the change in three stages: stage 1 to move all queues and their contents off the page sets; stage 2 to remove the page sets from MQ; and stage 3 to physically remove the page sets after one or two days of successful runs.
• Update the disaster recovery procedures and re-test.

**Contributing to *MQ Update***

Why not share your expertise and earn money at the same time? *MQ Update* is looking for program code, JavaScript, REXX EXECs, etc, that experienced users of WebSphere MQ have written to make their life, or the lives of their users, easier. We are also looking for explanatory articles, and hints and tips, from experienced users. We would also like suggestions on how to improve MQ performance.
We will publish your article (after vetting by our expert panel) and send you a cheque, as payment, and two copies of the issue containing the article once it has been published. Articles can be of any length and should be e-mailed to the editor, Trevor Eddolls, at trevore@xephon.com. A free copy of our *Notes for Contributors*, which includes information about payment rates, is available from our Web site at www.xephon.com/nfc.

Setting up a WebSphere MQ Integrator Broker in a parallel Sysplex

INTRODUCTION

WebSphere MQ messages can be processed using IBM's WebSphere MQ Integrator Broker product. The broker can be used for a wide range of message volumes because it is highly scalable: multiple message flow instances use threads, and multiple processes are used when the message flow is deployed to multiple execution groups. But there are cases where the volume of messages that can be processed on one computer is not high enough for the business need. In this case a second broker is needed on another computer to increase the throughput. In addition to throughput, there are also other reasons for running a message flow on another computer: for example, the availability of the service provided by a message flow has to be increased, or the costs to access the message flow from another application running on another computer are too high.

![Figure 1: WebSphere BI for FN Instance with two WebSphere MQ Integrator Brokers](image)

Such requirements arise for WebSphere Business Integration for Financial Networks (hereafter called WebSphere BI for FN). This product is designed for high message throughput and high availability. WebSphere BI for FN allows a configuration with message flows on multiple brokers, as shown in Figure 1. Message Flow 1 in this figure could be the message flow that needs higher throughput or better availability. The other message flow could be used for administration purposes or for other low-volume message processing.
The concepts described here for two brokers also apply to configurations with any higher number of brokers. The WebSphere BI for FN product is divided into a base part and network-specific extensions. WebSphere BI for FN Base provides functionality to deliver products on top of WebSphere MQ Integrator Broker and a set of common functions that are required by different product extensions, such as auditing, security, or configuration. Configuration describes the run-time behaviour of a message flow. This requires shared queues to share the workload and a shared database to access the same data, as shown in Figure 1. This article describes possible ways to set up multiple WebSphere MQ Integrator Brokers for z/OS in a parallel Sysplex, and the decisions that have to be taken while defining the system.

WEBSPHERE MQ

Each WebSphere MQ Integrator Broker requires its own WebSphere MQ queue manager. Using WebSphere MQ, there are three possible ways to set up queue managers in a parallel Sysplex: the queue managers can be unrelated, they can be part of a cluster of queue managers, or they can be part of a queue-sharing group.

**Unrelated queue managers**

If the queue managers are unrelated, each broker processes just the workload that applications address directly to it. With this system set-up, the volume of messages being processed can be increased, but there is no workload balancing and no failover in case one of the brokers, the queue managers, or an entire system fails.

**Queue manager clusters**

A WebSphere MQ cluster, as shown in Figure 2, is a connection between queue managers on independent computers. This is possible on most WebSphere MQ-supported platforms, including z/OS. In a cluster, each WebSphere MQ Integrator broker is connected to a separate WebSphere MQ queue manager. This allows each broker to process the workload that is addressed to its queue manager.
Workload balancing between both brokers can be achieved if the applications sending the messages are connected to a separate queue manager, named Gateway QM in Figure 2, which is also part of the same WebSphere MQ cluster.

![Figure 2: A WebSphere MQ cluster](image)

The benefits of workload balancing come at the cost of remote messaging. These costs are processing costs and latency. The system works as follows. The application, Application 1, connects to the queue manager, Gateway QM1. It sends a message to the input queue of the message flow. The address uses just the name of the queue, with the queue manager name left blank. The queue manager Gateway QM1 automatically finds out the name of the queue manager where the queue is located. To be able to do this, the input queue of the message flow must be defined as a cluster queue. This way, the information about the availability of this queue is distributed to all queue managers in the cluster. For additional throughput and availability, a queue with the same name is defined on both queue managers, QM 1 and QM 2. Since this information is available to Gateway QM, this queue manager can decide which one to use. The default behaviour in such a case is that the queue manager selects the queue on the other queue manager, in a round-robin sequence. This workload distribution algorithm is performed only for queues with the same name on queue managers that are up and running and connected to the cluster. As long as everything is up and running, the system works well. There are some drawbacks in the case of system failures or the failure of a broker. If a complete system fails, eg System 2 goes down, the queue manager cluster detects this and will not send any more messages to that system until it is started again. But messages that are already on System 2 at the time of the failure will no longer be processed.
Also, messages that have already been routed to the queue manager on System 2 will stay in the transmission queue to that system. They will not automatically be re-routed to any system that is active. The situation is much worse if just the broker on the system fails. In this case Gateway QM1 continues to direct messages to the queue manager on System 2 even though they are not processed. In such a case, either an operator needs to shut down the queue manager or, if the broker can be recovered, the broker has to process a large backlog of messages. To overcome the problem of messages that are still in the queue and are not processed, you may configure the broker and the queue manager in a way that they can be restarted on another system, for example System 1. But this still has the problem of a potentially large backlog of messages that needs to be processed at start-up time. Figure 2 also shows an optional gateway queue manager, Gateway QM2. This is not required for the availability of the message flows, but having a second gateway queue manager is a good choice to increase the availability and message throughput of sending and receiving applications.

**Queue sharing groups**

A system with better availability and workload balancing characteristics is a queue-sharing group. Such a system is shown in Figure 3.

![Figure 3: A WebSphere MQ sharing group](image)

What's obvious when comparing this figure to Figure 2 is that no gateway queue manager is required. A queue-sharing group is available only on z/OS. All queue managers in the sharing group share some common resources; for example, all queue managers must have access to common files where messages in shared queues can be stored. The applications send their messages directly into the input queue of the message flow. This queue must be defined as a sharing group queue in the queue managers. This way, the queue appears to be a single local queue that spans System 1 and System 2.
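The two kinds of queue definition discussed above can be sketched in MQSC; the queue name, cluster name, and CF structure name are all hypothetical examples, not names from this article:

```
* Cluster approach: define the flow input queue as a cluster queue
* on both QM 1 and QM 2 (cluster name MQICLUS is illustrative):
DEFINE QLOCAL(FLOW1.INPUT) CLUSTER(MQICLUS)

* Queue-sharing group approach: define the flow input queue once as a
* shared queue backed by a CF structure (APPL1 is illustrative):
DEFINE QLOCAL(FLOW1.INPUT) QSGDISP(SHARED) CFSTRUCT(APPL1)
```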
The handling and coordination of messages across the systems is achieved by WebSphere MQ exploiting the coupling facility. The coupling facility can be thought of as shared memory between both systems. To the message flows, it appears as if each of their queue managers has a local queue from which they get their messages. Each message flow will take as many messages as it can process. This means that there is automated workload distribution according to the processing capabilities of each system. This set-up has the advantage that, if one broker fails, the message flow in the second broker can process all the messages, albeit with a lower throughput rate. If it's just the broker that fails, System 2 is able to recover messages that were in-flight at the time of the failure, and these messages can then be processed on the remaining broker. The broker and the message flow in the available broker will not be aware of the fact that the second broker is no longer available, and they do not have to take any action. The behaviour is similar if a complete system fails. This is detected by the remaining queue manager, which makes all the messages available to the remaining system, where they can be processed. The only problem here might be with messages that were being processed when the system failed. If these messages cannot be recovered until System 2 is back, they get into an in-doubt status and cannot be processed. The implementation of the queue-sharing group has some limitations, for example the total amount of data that can be held in such queues and the maximum length of a message in a sharing group queue. The total amount of data depends on the available resources for the coupling facility that can be used by WebSphere MQ. These coupling facility resources may have to be shared with other resource managers, for example a database. Messages in a sharing group queue are currently limited in length to a maximum of around 63KB.
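The "each message flow takes as many messages as it can process" behaviour can be illustrated with a small simulation. This is plain C invented for illustration, not WebSphere MQ code; the per-tick capacities are made-up numbers:

```c
#include <assert.h>

/* Illustrative simulation of the shared-queue pull model: both brokers
 * take messages from one shared queue as fast as they can process them,
 * so the workload split follows each system's processing capability. */
typedef struct {
    int per_tick;    /* messages this broker can process per time slice */
    int processed;   /* running total of messages it has taken */
} Broker;

/* Drain 'total' messages from a shared queue: in each time slice, every
 * broker pulls up to its capacity until the queue is empty. */
void drain_shared_queue(Broker *b, int nbrokers, int total)
{
    int remaining = total;
    while (remaining > 0) {
        for (int i = 0; i < nbrokers && remaining > 0; i++) {
            int take = b[i].per_tick < remaining ? b[i].per_tick : remaining;
            b[i].processed += take;
            remaining -= take;
        }
    }
}
```

A broker three times as fast ends up processing three quarters of the traffic, and if one broker's capacity drops to zero the other simply drains the whole queue, which mirrors the fail-over behaviour described above.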
If a broker application consists of multiple message flows, like most WebSphere BI for FN extensions, you will need to check which queues really need to be sharing group queues. Only those that require high availability and fail-over capabilities should be selected, to keep the required coupling facility resources to a minimum. Based on the advantages and disadvantages of all methods, WebSphere BI for FN has decided to start with support for a queue manager cluster. The main reasons for this decision are that messages processed by the existing extensions can exceed 63KB, and the cluster approach also works on all other platforms supported by WebSphere BI for FN.

**DATABASE**

If you are processing data with message flows in multiple brokers, then all processing should work with the same data. In distributed environments, this problem is usually solved by having one database, which is referenced by all brokers. The database for such a configuration could be on the system where one of the brokers resides, or on a separate system, as shown in Figure 4. Such a system has the drawback that System 3 is a single point of failure. To decrease the impact on availability if this system fails, a standard mechanism (such as a cold standby system) can be used. Another drawback with this set-up is the communication cost involved in communicating from the broker systems to the database system. For a low volume of messages this may not be relevant but, when processing high volumes of messages with many database interactions, the processing costs and latency introduced by a network could be significant. On z/OS there's another possibility with DB2 (the only supported database on this platform). Similar to a queue sharing group, database subsystems can be organized into a data sharing group, as shown in Figure 5.

![Figure 5: Broker with data sharing group](image)
In such a configuration there's a database running on each server, but the databases can communicate with each other using a coupling facility and can therefore process the same data in the same tables. Each database in the data-sharing group has its own subsystem id and its own ODBC data source name (DSN). Any application can use this name to connect to this member of the data sharing group as usual. The data-sharing group also has its own identification and hence its own DSN. Any request addressed to this DSN is routed by the sharing group to a sharing group member that can process it. When setting up the broker in such an environment, the broker database tables can be located, non-shared, in the database subsystem on the same system as the broker. For example, Broker 1 would be customized to have its broker database tables in DSN1. Database tables for application message flows that are available to both brokers need to be shared so they are accessible by both brokers. They can be defined in any of the subsystems. The message flows accessing such shared tables would reference them using the DSN of the data-sharing group. Such a set-up is shown in Figure 6.

**WEBSPHERE MQ INTEGRATOR BROKER**

In addition to the underlying subsystems, some precautions also have to be taken for the broker. First of all, it must be assured that all processing components for the message flows are at the same level. In addition to the broker executables, this must also be assured for all plug-ins and libraries added by the broker application – in this case WebSphere BI for FN. In general this could be assured by always installing the same level on both systems, but this approach is error prone, so a common level could not be guaranteed. It should instead be possible to install the data on one system and make it available to all other systems, for example by using Network File System (NFS) services.
Such an approach has the disadvantage that it introduces a new single point of failure: if the system where the installation data is located fails, all the other systems are unable to work. z/OS supports the Hierarchical File System (HFS). This is a single dataset on a disk that can be accessed by multiple systems. Since all plug-in files and shared libraries used by WebSphere MQ Integrator Broker and WebSphere BI for FN are read-only, there are no sharing or locking problems. When designing WebSphere BI for FN, attention was paid to the fact that no files need to be accessed in write or update mode. Having all executables at the same level is also required for any other kind of program that is involved in the message processing. One example is that WebSphere BI for FN uses DB2 stored procedures to do some processing. Other examples could be programs invoked by a flow, or applications used to initiate messages or to process the result of the activities in the message flows. Not only the executables of the products have to be at the same level, but so do the message flows processing the messages. To ensure this, the flows should be administered in such a way that, after every change to a main flow or a sub-flow, all instances of the flow are brought to the new level. With WebSphere MQ Integrator Broker this can be achieved by doing a delta deploy at the topology level after any change to a message flow. Deploying identical message flows to different brokers is possible if the resources they access, mainly WebSphere MQ queues and DB2 tables, have the same names. This is achieved by the use of shared queues and the shared database.

**SUMMARY**

WebSphere Business Integration for Financial Networks has shown that it is possible to set up WebSphere MQ Integrator Broker in a parallel Sysplex. It was achieved with a WebSphere MQ cluster and data in a data-sharing group, as shown in Figure 7. With this configuration higher throughput and higher availability are achieved.
Nevertheless, performance measurements to validate whether this is achieved with reasonable processing overhead, compared with processing on a single server, are still outstanding.

*Michael Groetzner*
*IBM (Germany)*

Code from individual articles of *MQ Update*, and complete issues in PDF format, can be accessed on our Web site, at:

www.xephon.com/mq

You will be asked to enter a word from the printed issue.

Queue back-up and restore tool for Unix

INTRODUCTION

Sometimes it is useful to copy or move WebSphere MQ messages from one queue to another queue, or to a file for later use. The stored messages may be used for development testing (to create the same input data several times) or for off-line analysis after a problem has occurred. The tool may also be used to extract test data from a production system, or to rescue messages when a queue manager has to be recreated – eg to increase the size of the log files.

THE QUEUE BACK-UP AND RESTORE TOOL

Intention for backupQ

I have worked for several years within a heterogeneous WebSphere MQ environment. On z/OS there is a utility, CSQUTIL, to back up and restore queue contents. The Unix guys often asked for such a tool on their systems. So I created the program backupQ, to enable the Unix administrators to back up and restore queue contents.

How backupQ works

The program backupQ will move or copy messages from one queue to another queue or to a file, and from a file back to a queue. The program has the following parameters:

- `-f ...` – the function to perform, with the following possible values:
  - `q2q` – copy or move from an input queue to a target queue.
  - `q2f` – copy or move from an input queue to a file.
  - `f2q` – copy from a file to a queue.
- `-i ...` – the name of the input queue or file.
- `-o ...` – the name of the output queue or file.
- `-m ...` – optional; the name of the queue manager (if not configured as the default queue manager).
- `-d` – optional; move messages instead of copying them (destructive get; not for input from a file and not in combination with option `-r`).
- `-r first,last` – optional; range of the messages to copy or move (not in combination with option `-d`). If the last value is not set, all messages up to the end of the queue or file will be copied.

Files created by backupQ always contain the message descriptor. This will be partially restored by backupQ when the messages are copied back to a queue. Partially means that attributes like persistence are preserved, whereas attributes like the put time and date will be set by WebSphere MQ. BackupQ writes a start and an end tag, to identify files created by itself. If these tags are missing, backupQ returns an error message. It is also possible to use manually-created plain text files, eg to create some test data. Such a file needs a manually-added start tag and end tag (BACKUPQ_START_OF_FILE and BACKUPQ_END_OF_FILE respectively) in the first and the last line to be usable by backupQ. BackupQ then reads the contents, line by line, and puts each line as a new message on the queue.

**Building the program backupQ**

The following lines create the binary – on non-DCE platforms – from the file backupQ.c for AIX and Sun Solaris systems. I assume that the GNU compiler gcc is installed in `/usr/local/bin`.

**AIX:**

How to build the software on further platforms is described in the IBM document *WebSphere MQ Application Programming Guide*.

**Installation of the queue back-up and restore tool**

There is nothing to install; just copy the binary to your program search path.

**EXAMPLES**

**Example 1**

Copy the whole contents of a queue to another queue. Leave the messages in the original queue. The program connects to the default queue manager:

```
backupQ -f q2q -i InputQueue -o OutputQueue
```

**Example 2**

Move the whole contents of a queue to another queue. The messages in the original queue are deleted.
The program connects to the default queue manager:

```
backupQ -f q2q -i InputQueue -o OutputQueue -d
```

**Example 3**

Move the whole contents of a queue to a file. The messages in the queue are deleted. The program connects to the queue manager TESTQM:

```
backupQ -f q2f -i InputQueue -o OutputFile -d -m TESTQM
```

**Example 4**

Copy the whole contents of a file to a queue. The messages in the file are preserved. The program connects to the queue manager TESTQM (the option -d is not available in combination with a file as an input device):

```
backupQ -f f2q -i InputFile -o OutputQueue -m TESTQM
```

**Example 5**

Copy the 10th to 20th messages of a file to a queue (11 messages in total). The messages in the file are preserved. The program connects to the default queue manager (the option -d is not available in combination with a file as an input device or with the option -r):

```
backupQ -f f2q -i InputFile -o OutputQueue -r 10,20
```

**Example 6**

Copy the 30th to the last message of an input queue to an output queue. The messages in the input queue are preserved. The program connects to the queue manager TESTQM (the option -d is not available in combination with the option -r):

```
backupQ -f q2q -i InputQueue -o OutputQueue -r 30 -m TESTQM
```

**Example 7**

Copy the 10th message of an input queue to an output queue. The message in the input queue is preserved. The program connects to the default queue manager (the option -d is not available in combination with the option -r):

```
backupQ -f q2q -i InputQueue -o OutputQueue -r 10,10
```

DESCRIPTION OF THE CODE

The code consists of several parts.

Global parameters

Two groups of global parameters are used in the program.
The parameters beginning with FUNC define the types of input and output device and whether read messages will be removed (this is called the function of the program): 62: /* Define some flags */ 63: #define FUNC_NONE 0 /* nothing to do */ 64: #define FUNC_Q2Q 1 /* copy or move from a queue to another queue */ 65: #define FUNC_Q2F 2 /* copy or move from a queue to a file */ 66: #define FUNC_F2Q 4 /* copy from a file to a queue */ 67: #define FUNC_MOVE 8 /* move messages instead of copying them */ The parameters beginning with FILE define some strings that are used as marks in the input or output file: 69: /* Define some file parameters */ 70: #define FILE_MD_HEADER "MQMD" /* message line contains a descriptor */ 71: #define FILE_MQ_START_TAG "BACKUPQ_START_OF_FILE" /* start tag of files */ 72: #define FILE_MQ_END_TAG "BACKUPQ_END_OF_FILE" /* end tag of files */ Function main The function main first calls the function check_args, which checks the command line parameters (line 877). If this check is successful, the program connects to the queue manager (line 881) and opens the input and output queue(s) or file (lines 894 and 903). Then the program calls the function copy_messages (line 913), which copies or moves the message(s) from the input to the output device. When the copying or moving has finished, the program closes the input and output devices (line 917) and disconnects from the queue manager (line 923). 858: int main(int argc, char **argv) 859: { ... 877: function = check_args(argc, argv, input, output, QMName, 878: &first_msg, &last_msg); 879: 880: /* Connect to queue manager. */ 881: MQCONN(QMName, /* queue manager */ 882: &Hcon, /* connection handle */ 883: &CompCode, /* completion code */ 884: &CReason); /* reason code */ ... 893: /* Open the input device. */ 894: open_input_device(function, input, Hcon, &OpenSrcCode, 895: &HobjSrc, &fp); ... 902: /* Open the output device. 
*/
903: open_output_device(function, output, Hcon, &OpenDestCode,
904: &HobjDest, &fp);
...
911: /* Copy or move messages from the input device to the */
912: /* output device. */
913: copy_messages (function, first_msg, last_msg, Hcon, input,
914: &HobjSrc, output, &HobjDest, fp);
915:
916: /* Close the open devices. */
917: close_devices (function, Hcon, OpenSrcCode, &HobjSrc, OpenDestCode,
918: &HobjDest, fp);
919:
920: /* Disconnect from MQM if connected. */
921: if (CReason != MQRC_ALREADY_CONNECTED )
922: {
923: MQDISC(&Hcon, /* connection handle */
924: &CompCode, /* completion code */
925: &CReason); /* reason code */
...
928: }
929:
930: exit(0);
931: }

Function for checking the command line arguments

The function check_args checks the command line parameters and displays an error message if invalid or duplicate parameters are used. Check_args returns a number, which describes the function of the program. For example, the value FUNC_Q2Q | FUNC_MOVE (the flags combined by bitwise OR) means: move messages from a queue to another queue; the messages in the source queue will be deleted.

Functions for opening and closing input and output devices

The functions `open_input_device` and `open_output_device` open the queue(s) or the file using the calls `MQOPEN` (for a queue) or `fopen` (for a file), depending on the function of the program. If the input device is a file, it will be opened in read-only mode. If the output device is a file, it will be checked to see whether it already exists. If it exists, the user will be asked whether the file should be overwritten or the messages appended to the existing file. The function `close_devices` closes the queue(s) or the file by calling `MQCLOSE` and `fclose`.

Function to copy or move messages

The central function, which does most of the work, is `copy_messages`. This function contains a loop in which it reads messages from a queue or file (lines 639 and 652) and writes them back (lines 678 and 690).
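As a quick sanity check of how the FUNC_* flags defined earlier (lines 62 to 67) combine into the function word returned by check_args, here is a minimal sketch; `is_destructive_q2q` is a hypothetical helper for illustration, not part of backupQ:

```c
#include <assert.h>

/* The FUNC_* flags from backupQ (same values as in lines 62-67). */
#define FUNC_NONE 0
#define FUNC_Q2Q  1
#define FUNC_Q2F  2
#define FUNC_F2Q  4
#define FUNC_MOVE 8

/* Returns 1 when 'function' describes a destructive queue-to-queue
 * move, ie both the Q2Q bit and the MOVE bit are set. */
int is_destructive_q2q(int function)
{
    return (function & FUNC_Q2Q) && (function & FUNC_MOVE);
}
```

So `backupQ -f q2q ... -d` would correspond to the function word `FUNC_Q2Q | FUNC_MOVE`, which is the value 9.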
If no message range has been specified (parameter `first_msg` is equal to 0), every message is read from the input device (queue or file) and written to the output device. When the input device is a queue, the messages may be removed from the queue – if option `-d` has been passed to `backupQ` on the command line. Otherwise the messages are just browsed from the queue. When a message range is specified (parameter `first_msg` is greater than 0), only messages numbered between the parameters `first_msg` and `last_msg` are copied. The program stops reading after the last message in the range (line 625).

589: static int copy_messages(int function, int first_msg, int last_msg,
590: MQHCONN Hcon, char *input, MQHOBJ *HobjSrc, char *output,
591: MQHOBJ *HobjDest, FILE *fp)
592: {
...
620: /* Loop until an error occurs. */
621: while (CompCode != MQCC_FAILED)
622: {
623: /* Read when all messages have to be read (first_msg is 0) */
624: /* or the last message to copy or move has not yet been passed. */
625: if ((first_msg == 0) || (pos_count <= last_msg))
626: {
...
636: /* Input device is a queue. */
637: if ((function & FUNC_Q2Q) || (function & FUNC_Q2F))
638: {
639: MQGET(Hcon, /* connection handle */
640: *HobjSrc, /* object handle */
641: &md, /* message descriptor */
642: &gmo, /* get message options */
643: buflen, /* buffer length */
644: buffer, /* message buffer */
645: &messlen, /* message length */
646: &CompCode, /* completion code */
647: &Reason); /* reason code */
648: } /* Input device is a file. */
649: else
650: {
651: read_from_file(fp, /* file handle */
652: &md, /* message descriptor */
653: buflen, /* buffer length */
654: buffer, /* message buffer */
655: &messlen, /* message length */
656: &CompCode, /* completion code */
657: &Reason); /* reason code */
658: }
659: }
660: /* Write the read message again, when position counter is */
661: /* greater than number of the first message.
*/
662: if ((CompCode != MQCC_FAILED) && (pos_count >= first_msg))
663: {
...
672: /* Put each buffer to the message queues. */
673: if (buflen > 0)
674: {
675: /* Output device is a queue. */
676: if ((function & FUNC_Q2Q) || (function & FUNC_F2Q))
677: {

Functions for file input and output

I created two functions, read_from_file and put_to_file, which work similarly to the functions MQGET and MQPUT, but read from or write to a file. First the function read_from_file tries to read a line header (line 483). It compares the read data with the global constant FILE_MD_HEADER (line 487). If the strings are not equal, the file is interpreted as a manually-created test file. The function then rewinds the file pointer and reads a complete line out of the file (lines 494 and 497). In lines 499 and 500 a trailing newline character is replaced by a NULL character. If the strings that have been compared in line 487 are equal, the function branches to the else part (line 511). In this case the file is interpreted as a data file that was created previously by backupQ. The function now reads the characters up to the next colon. This string is interpreted as the length of the stored message descriptor and is written into the array len (lines 525 to 533). The following bytes up to the next colon are interpreted as the length of the message, and this length is also written into the array len in the same way. Now the message descriptor and the message itself are read (lines 537 and 542). The length of the message is returned by copying it from the array len to the parameter messlen (line 539). In line 545, the trailing newline character is read – just to set the file pointer. When the read string contains the end mark of the file (line 549), the completion and reason codes are set to MQCC_FAILED and MQRC_NO_MSG_AVAILABLE, to satisfy the check in the calling function copy_messages (lines 551 and 552). The write function put_to_file is much simpler than the read function.
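The length-header parsing performed by read_from_file (lines 525 to 533) can also be sketched as a standalone function. This is an illustrative re-implementation, not the article's code: it parses a record header of the form `MQMD:<md-length>:<msg-length>:` from an in-memory string instead of a FILE*, so the sketch is self-contained:

```c
#include <stdlib.h>
#include <string.h>

/* Parse a backupQ-style record header "MQMD:<len0>:<len1>:".
 * The two lengths are accumulated character by character up to each
 * colon, as in the article's loop. Returns 1 on success, 0 when the
 * descriptor mark is missing (manually-created file) or the header is
 * malformed. The example lengths used in the test are made up. */
int parse_record_header(const char *rec, long len[2])
{
    const char *p = rec;
    if (strncmp(p, "MQMD:", 5) != 0)
        return 0;                 /* no descriptor mark */
    p += 5;
    for (int idx = 0; idx < 2; idx++) {
        char buf[32];
        int n = 0;
        while (*p != ':' && *p != '\0' && n < 31)
            buf[n++] = *p++;      /* collect digits up to the colon */
        if (*p != ':')
            return 0;             /* malformed header */
        p++;                      /* skip the colon */
        buf[n] = '\0';
        len[idx] = atol(buf);
    }
    return 1;
}
```

With the lengths in hand, a reader like read_from_file knows exactly how many bytes of message descriptor and message payload follow the header.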
It first creates a string with a line mark and the sizes of the message descriptor and the message, separated by colons (line 568). Then it writes the message descriptor (line 571) and the message itself (line 573).

467: static void read_from_file(FILE *fp, MQMD *md, MQLONG buflen,
468: MQBYTE *buffer, MQLONG *messlen, MQLONG *CompCode, MQLONG *Reason)
469: {
...
482: /* Try to read a header string. */
483: fread (header, 1, sizeof(FILE_MD_HEADER), fp);
484: header[sizeof(FILE_MD_HEADER) - 1] = 0;
485:
486: /* Look for message descriptor mark. */
487: if (strncmp(header, FILE_MD_HEADER, sizeof(FILE_MD_HEADER) - 1)
488: != 0)
489: md_flag = FALSE;
...
491: /* Message descriptor mark not found, file is manually created. */
492: if (md_flag == FALSE)
493: {
494: fseek (fp, -sizeof(FILE_MD_HEADER), SEEK_CUR); /* rewind */
...
496: /* Read one line from the file. */
497: if (fgets(buffer, buflen, fp) != NULL)
498: {
499: if (buffer[strlen(buffer) - 1] == '\n')
500: buffer[strlen(buffer) - 1] = '\0';
501: *messlen = strlen(buffer);
502: }
503: }
...
510: /* File is created by this program (contains a message descriptor). */
511: else
512: {
...
523: /* Read the beginning of the line (contains the lengths of the */
524: /* message descriptor and the message itself, separated by colons). */
525: for (idx = 0; idx < 2; idx++)
526: {
527: c = 0;
528: buf[0] = 0;
529: while (c != ':')
530: {
531: c = getc(fp);
532: if (c != ':')
533: sprintf (buf, "%s%c", buf, c);
534: }
535: len[idx] = atol(buf);
536: }
537: fread (md, 1, len[0], fp); /* read the message descriptor */
538:
539: *messlen = len[1];
540:
541: /* Read now the message. */
542: fread(buffer, 1, *messlen, fp);
543:
544: /* Get the newline character (to set the file pointer). */
545: c = getc(fp);
546:
547:
548: /* Check for an end tag. */
549: if (strncmp(buffer, FILE_MQ_END_TAG, strlen(FILE_MQ_END_TAG)) == 0)
550: {
551: *CompCode = MQCC_FAILED;
552: *Reason = MQRC_NO_MSG_AVAILABLE;
553: }
554: }
...
560: static void put_to_file(FILE *fp, MQMD *md, MQLONG messlen, MQBYTE *buffer,
561: MQLONG *CompCode, MQLONG *Reason)
562: {
...
567: /* Write a mark and the lengths of message descriptor and message. */ 568: num = fprintf(fp, "%s:%ld:%ld:", FILE_MD_HEADER, sizeof(MQMD), 569: messlen); 570: /* Write the message descriptor to the file. */ 571: num += fwrite(md, 1, sizeof(MQMD), fp); 572: /* Write the message itself to the file. */ 573: num += fwrite(buffer, 1, messlen, fp); ... 587: } LISTING OF BACKUPQ.C The full listing of the program backupQ is too long to be written here. The program code may be downloaded from Xephon’s Web site at www.xephon.com/extras/backupQ.c. Parts of the listing are shown in the text above. Hubert Kleinmanns Senior Consultant N-Tuition Business Solutions AG (Germany) © Xephon 2004 This article looks at Java Message Service, WebSphere Application Server, and Message Driven Beans. The good news is that Java Message Service (JMS) is now easier to install, administer, and use. It is designed to provide a robust asynchronous messaging model, which will deliver services for constructing high-performance, encapsulated, portable, and transactional applications. WebSphere Application Server (WAS) 5.0 is Java 2 platform Enterprise Edition (J2EE) 1.3 compliant – which means that it comes with an integrated JMS provider. WAS 5.0 provides support for both the JMS point-to-point and publish/subscribe messaging models. WAS 5.0 also offers full support for Message Driven Beans (MDBs). MDBs are part of the Enterprise Java Bean (EJB) 2.0 specification, and their role is to provide asynchronous messaging using base JMS functionality. It was difficult to build messaging applications using J2EE before MDBs were introduced as part of J2EE 1.3. MDBs are good for application developers because they delegate the responsibility of providing infrastructure for transactions, security, and concurrently processing messages to the EJB container. So let’s take a look at each of these in detail. If you want to access asynchronous messaging systems from Java applications you need an API to do it. 
JMS is a Sun Microsystems Java specification that defines just such an API. As a consequence, JMS provides an asynchronous messaging model with services for constructing encapsulated, portable, and transactional applications that perform well. Importantly, it is an integral part of the J2EE 1.3 specification. Remember that WebSphere Application Server (WAS) is now J2EE 1.3 compliant. WAS 5.0 now includes an integrated JMS Provider with the product, called the WebSphere JMS Provider. This delivers enhanced JMS integration with the application server. Customers no longer need to purchase MQSeries 5.2 or WebSphere MQ 5.3 for this functionality, as they did with WAS 4.0; and even then, that only delivered the back-end messaging support of the provider – the JMS implementation was obtained through a separate download/install of the corresponding MQ classes for Java and MQ classes for Java Message Service LPP. There was then a long period of setting everything up. WAS 5.0 now includes:

- WebSphere MQ 5.3 – providing the back-end messaging server used to receive, store, and send asynchronous messages.
- WebSphere MQ Classes for Java and JMS 5.3 – providing the Java classes and interfaces used to access the back-end messaging server through JMS.

These products are installed by default during the WAS 5.0 installation process and provide the underlying JMS Provider used by WAS. All access is performed through the WAS user interfaces (eg the administrative console), so users don't need to interact directly with these products. Licensing restrictions mean that a full WebSphere MQ licence has to be purchased if applications use these products without integrating with WAS 5.0 JMS applications. For example, without a full licence an RPG application could still use the native APIs of the product to send messages to a WebSphere MQ queue, as long as a WebSphere 5.0 JMS application receives and processes those messages.
In the same way, an RPG application could use the native WebSphere MQ APIs to receive messages sent by a WebSphere 5.0 JMS application. The JMS server provides an integrated JMS security service. This (not surprisingly) allows JMS resources to be secured. It is similar to the security service used with other types of J2EE component (such as servlets, JSPs, and EJBs) in WAS. The JMS server can be found in the WAS 5.0 run-time. It interacts with the WebSphere JMS Provider and the application server run-time. It runs in the application server process/job, although if the WAS 5.0 instance is part of a Network Deployment cell, the JMS server runs as a separate process/job and interacts with the application server process. The JMS server is used by the application server to interact with the WebSphere JMS Provider to send/receive and publish/subscribe JMS messages. For backward compatibility, JMS through WAS 5.0 continues to support the point-to-point messaging model. JMS resources, such as connection factories, queues, topics, and message listeners, are administered by the WAS administration tools using the JMS server. There are now standard WAS administrative interfaces for this – the WAS administrative console and the WAS administrative scripting engine (wsadmin). This means that JMS resources can now be administered in the same way as other resources (JDBC drivers, data sources, etc) using these common interfaces. Moving on to Message Driven Beans (MDBs), WAS 5.0 fully supports MDBs, which are part of the EJB 2.0 specification. They are a specialized form of session bean that wraps a JMS resource such as a queue or topic. In essence, an MDB listens to a message destination or message endpoint and is activated when a message arrives. MDBs are anonymous in nature and cannot be directly invoked by a client. An MDB is invoked by sending a message to the destination or endpoint to which it is listening.
An MDB does not have home or remote interfaces like other types of EJB; it has only a bean-implementation class. That class implements two interfaces – one an EJB interface and the other a JMS interface. The MDB bean class has to implement the javax.ejb.MessageDrivenBean interface. It must also implement the message listener interface required by the messaging type that it supports. An MDB that supports JMS must implement the javax.jms.MessageListener interface. MDBs provide asynchronous messaging using base JMS functionality. They are server-side components and are essentially stateless session beans. They are used to process JMS messages, and they are capable of participating in global transactions. Part of the MDB implementation is the message listener service. One or more listeners associated with a given MDB are controlled and monitored by a listener manager in WAS 5.0. As soon as an incoming JMS message arrives, it is passed on for processing. The listener then goes back to listening – there is no waiting. MDBs are thread-safe and capable of receiving many messages from various applications and processing them simultaneously. MDBs are accessible only via asynchronous messages and can't be accessed via standard EJB mechanisms such as the Java RMI/IIOP API. Without WAS 5.0 there's no JMS. Without JMS there's no MDB. Without MDB there's no efficient processing of asynchronous messages.

Nick Nourse
Independent Consultant (UK)
© Xephon 2004

DataPower Technology has announced the XI50 Integration Appliance, a networking device that makes XML and non-XML data usable for mainframes, Enterprise Service Buses (ESBs), and application integration. The product supports a range of transport protocols, including MQSeries, and can perform translations between formats other than XML. The XI50 can parse and transform arbitrary binary, flat text, and XML messages, including COBOL CopyBook, CICS, ISO ASN.1, and EDI.
For further information contact: DataPower Technology, One Alewife Center, 4th Floor, Cambridge, MA 02140, USA. Tel: (617) 864 0455. * * * Compuware has announced Version 3.1 of STROBE, which is designed to help users improve the efficiency of their applications. STROBE 3.1 provides support for WebSphere Application Server, complementing its existing support for Java, enabling users to manage and improve the performance of Java and WebSphere applications. It also provides information on how Java and WebSphere applications interact with CICS, DB2, and other z/OS facilities. For further information contact: Compuware, One Campus Martius, Detroit, MI 48226, USA. Tel: (313) 227 7300. * * * IBM has announced CICS Interdependency Analyzer for z/OS Version 1.3, which is used with CICS Transaction Server on a mainframe to identify the resources used by CICS transactions and the relationships between them. The product also reports on WebSphere MQ, DB2, and IMS resources that are used by CICS. The main resources that are identified include those associated with transactions, programs, BMS maps, files, temporary storage queues, transient data queues, 3270 Bridge facility, Web Services, CorbaServer, and Enterprise JavaBeans. For further information contact your local IBM representative. * * * IBM has announced WebSphere Business Integration Modeler Version 5, which can help companies establish a more detailed map of the flow of business processes across their IT systems. This helps to identify slowdowns, and respond faster to customer demand and changing market conditions. WebSphere Business Integration Modeler provides support for WebSphere Business Integration Server Foundation, WebSphere MQ message queueing software, and Rational Rose XDE development tools. Customers can work with existing content based on standards like XML, and extend it using WebSphere Business Integration Modeler’s simulation and modeling capabilities. 
For further information contact your local IBM representative.
Improving Automatic Source Code Summarization via Deep Reinforcement Learning Abstract—Code summarization provides a high-level natural language description of the function performed by code, which can benefit software maintenance, code categorization and retrieval. To the best of our knowledge, most state-of-the-art approaches follow an encoder-decoder framework which encodes the code into a hidden space and then decodes it into natural language space, suffering from two major drawbacks: a) their encoders only consider the sequential content of code, ignoring the tree structure which is also critical for the task of code summarization; b) their decoders are typically trained to predict the next word by maximizing the likelihood of the next ground-truth word given the previous ground-truth words. At test time, however, the model is expected to generate the entire sequence from scratch. This discrepancy can cause an exposure bias issue, making the learnt decoder suboptimal. In this paper, we incorporate an abstract syntax tree structure as well as the sequential content of code snippets into a deep reinforcement learning framework (i.e., actor-critic network learning). The actor network provides the confidence of predicting the next word according to the current state. On the other hand, the critic network evaluates the reward value of all possible extensions of the current state and can provide global and lookahead guidance for exploration. We employ an advantage reward based on the BLEU metric to train both networks. Comprehensive experiments on a real-world dataset show the effectiveness of our proposed model when compared with the state-of-the-art ones. I. INTRODUCTION In the life cycle of software development (e.g., implementation, testing and maintenance), nearly 90% of effort is spent on maintenance, and much of this effort is spent on understanding the maintenance task and the related source code [1].
Thus, documentation which provides a high-level description of the task performed by code is always a must for software maintenance. Even though various techniques have been developed to assist the programmer during the implementation and testing of software, documenting code with comments remains a labour-intensive task, with the result that few real-world software projects adequately document their code to reduce future maintenance costs [2], [3]. It’s nontrivial for a novice programmer to write good comments for source code. A good comment should at least have the following characteristics: a) Correctness. The comments should correctly clarify the intent of the code. b) Fluency. The comments should be fluent natural language that maintainers can easily read and understand. c) Consistency. The comments should follow a standard style/format for better code reading. Code summarization is a task that tries to comprehend code and automatically generate descriptions directly from the source code. The summarization of code can also be viewed as a form of document expansion. Successful code summarization can not only benefit the maintenance of source code [4], [5], but can also be used to improve the performance of code search using natural language queries [6], [7] and code categorization [8]. Motivation. Recent research has made inroads towards the automatic generation of natural language descriptions of software. Currently, most existing code summarization methods learn the semantic representation of source code based on statistical language models [9], [4], and then generate comments based on templates or rules [10]. With the development of deep learning, some neural translation models [11], [5], [12] have also been introduced for code summarization, which mainly follow an encoder-decoder framework. They generally employ a recurrent neural network (e.g., an LSTM) to encode the code snippets and utilize another recurrent neural network to decode the hidden state into coherent sentences.
These models are typically trained to maximize the likelihood of the next word on the assumption that the previous ground-truth words are given. These models are limited in two respects: a) the sequential and structural information of code is not fully utilized in the feature representation, which is critical for code understanding. For example, given two simple expressions “f=a+b” and “f=c+d”, although they are quite different as two lexical sequences, they share the same structure. In program analysis, the structure of code is usually represented by abstract syntax trees, as shown in Figure 1a. b) these models, also termed “teacher-forcing”, suffer from exposure bias since at test time the ground truth is missing and previously generated words from the trained model distribution are used to predict the next word. Figure 1 presents a simple illustration of the discrepancy between the training and testing processes in these classical encoder-decoder models. In the testing phase, this exposure bias causes errors to accumulate and makes these models sub-optimal: they are unable to generate words which are appropriate but have a low probability of being drawn in the training phase. Contribution. In this paper, we aim to address these two issues. To effectively capture the syntactic (or structural) information of code snippets, we employ the abstract syntax tree (AST) [13], a data structure widely used in compilers, to represent the structure of program code. Figure 1a shows an example of a Python code snippet and its corresponding AST. The root node is a composite node of type FunctionDef; leaf nodes, which are typed as Name, are tokens of code snippets. It’s worth mentioning that the tokens from AST parsing may differ from those from word segmentation. In our paper, we consider both of them. We parse the code snippets into abstract syntax trees (ASTs), and then propose an AST-based LSTM to capture this structural information. (Figure 1b illustrates the limitation of maximum likelihood text generation.)
To overcome the exposure bias, we draw on the insights of deep reinforcement learning, which integrates exploration and exploitation into a single framework. Instead of learning a sequential recurrent model that greedily looks for the next correct word, we utilize an actor network and a critic network to jointly determine the next best word at each time step. The actor network, which provides the confidence of predicting the next word according to the current state, serves as local guidance. The critic network, which evaluates the reward value of all possible extensions of the current state, serves as global and lookahead guidance. Our framework is thus able to include good words that would have a low probability of being drawn by the actor network alone. To learn these two networks more efficiently, we start by pretraining the actor network using standard supervised learning with a cross-entropy loss, and pretraining the critic network with a mean square loss. Then, we update the actor and critic networks according to an advantage reward based on the BLEU metric via policy gradient. We summarize our main contributions as follows. - We propose a more comprehensive representation method for source code, with one AST-based LSTM for the structure of source code, and another LSTM for the sequential content of source code. Furthermore, a hybrid attention layer is applied to fuse these two representations. - We propose an actor-critic network learning framework, an advanced deep reinforcement learning framework, to cope with the exposure bias issue existing in most traditional maximum likelihood estimation-based text generation frameworks. - We validate our proposed model on a real-world dataset of 108,726 Python code snippets. Comprehensive experiments show the effectiveness of the proposed model when compared with some state-of-the-art ones. To facilitate other researchers in repeating our experiments, we will release our dataset and source code later.
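The structural point motivating the first contribution — that lexically different expressions such as “f=a+b” and “f=c+d” share one AST — can be checked directly with Python's standard `ast` module (a small sketch, independent of the paper's own parser):

```python
import ast

def shape(node):
    """Recursively reduce an AST to node types only, dropping identifiers."""
    return (type(node).__name__, [shape(c) for c in ast.iter_child_nodes(node)])

t1 = ast.parse("f = a + b")
t2 = ast.parse("f = c + d")

# Different tokens, identical tree structure.
assert shape(t1) == shape(t2)
# Changing the operator changes the structure (Add vs Mult node).
assert shape(t1) != shape(ast.parse("f = a * b"))
```

This is exactly the signal a purely sequential encoder misses: the two assignments differ at every operand token yet produce the same tree shape.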
**Organization.** The remainder of this paper is organized as follows. We provide some background knowledge on the neural language model, the RNN encoder-decoder model and reinforcement learning in Section II for a better understanding of our proposed model. We also give an overview of our proposed framework in Section II. Section III presents a hybrid embedding approach for code representation. Section IV shows our proposed deep reinforcement learning framework. Section V describes the dataset used in our experiments and shows the experimental results and analysis. Section VI discusses some threats to validity and limitations of our model. Section VII highlights some work related to this paper. Finally, we conclude this paper and give some future research directions in Section VIII. ## II. BACKGROUND In this section, we first present some background knowledge on text generation used in this paper. To start with, we introduce some basic notation and terminology. We consider the problem of learning to produce an output sequence \( y = (y_1, \ldots, y_{|y|}) \), given an input \( x = (x_1, x_2, \ldots, x_{|x|}) \), where \(|\cdot|\) denotes the length of a sequence. We will often use the notation \( y_{f \ldots i} \) to refer to subsequences of the form \((y_f, \ldots, y_i)\). Two sets of input-output pairs \((x, y)\) are assumed to be available, one for training and one for testing. ### A. Language Model Language models compute the probability of occurrence of a number of words in a particular sequence. The probability of a sequence of \( T \) words \( \{y_1, \ldots, y_T\} \) is denoted as \( p(y_1, \ldots, y_T) \).
Since the number of words coming before a word, \( y_i \), varies depending on its location in the input document, \( p(y_1, \ldots, y_T) \) is usually conditioned on a window of \( n \) previous words rather than all previous words: \[ p(y_1, \ldots, y_T) = \prod_{i=1}^{T} p(y_i | y_{1:i-1}) \approx \prod_{i=1}^{T} p(y_i | y_{i-(n-1):i-1}) \] This kind of n-gram approach suffers from apparent limitations [14], [15]. For example, the n-gram model probabilities are not derived directly from the frequency counts, because models derived this way have severe problems when confronted with any n-grams that have not explicitly been seen before. Furthermore, since they are simply smoothed counts of term co-occurrences, they are limited in their ability to generalize beyond the explicit features observed in training [16], [17]. The neural language model is a language model based on neural networks. Unlike the n-gram model, which predicts a word based on a fixed number of preceding words, a neural language model can predict a word from preceding words at longer distances. Figure 2(a) shows the basic structure of an RNN. The neural network includes three layers, that is, an input layer which maps each word to a vector, a recurrent hidden layer which recurrently computes and updates a hidden state after reading each word, and an output layer which estimates the probabilities of the following word given the current hidden state. The RNN reads the words in the sentence one by one, and predicts the possible following word at each time step. At step \( t \), it estimates the probability of the following word \( p(y_{t+1} | y_{1:t}) \) in three steps: First, the current word \( y_t \) is mapped to a vector \( e_t \) by the input layer: \( e_t = \text{input}(y_t) \).
Then, it generates the hidden state \( h_t \) in the hidden layer at time \( t \) according to the previous hidden state \( h_{t-1} \) and the current input \( e_t \): \[ h_t = f(h_{t-1}, e_t) \] Finally, \( p(y_{t+1} | y_{1:t}) \) is predicted according to the current hidden state \( h_t \): \[ p(y_{t+1} | y_{1:t}) = g(h_t) \] where \( g \) is a stochastic output layer (typically a softmax for discrete outputs) that generates output tokens. **B. Attentional RNN Encoder-Decoder Model** The RNN Encoder-Decoder consists of two recurrent neural networks (RNNs). The encoder transforms the code snippet \( x \) into a sequence of hidden states \( \{h_1, h_2, \ldots, h_{|x|}\} \) with a Recurrent Neural Network (RNN), while the decoder uses another RNN to generate one word \( y_{t+1} \) at a time in the target space. Generation is conditioned on all previously generated words \( y_{1:t} \) and a dynamically created context vector \( c_t \), which encodes the source sentence: 1) **Encoder:** As an RNN, the encoder has a hidden state \( h_t \), which is a fixed-length vector. At time step \( t \), the encoder computes the hidden state \( h_t \) by: \[ h_t = f(h_{t-1}, e(x_t)) \tag{4} \] Two common options for \( f \) are long short-term memory (LSTM) [42] and the gated recurrent unit (GRU) [35]. The last symbol of \( x \) should be an end-of-sequence (\( \text{eos} \)) symbol which signals the encoder to stop and output the final hidden state \( h_T \), which is used as a vector representation of \( x \). 2) **Decoder:** The output of the decoder is the target sequence \( y = (y_1, \ldots, y_T) \). One input of the decoder is a \( \text{start} \) symbol denoting the beginning of the target sequence.
At a time step \( t \), the decoder computes the hidden state \( h_t \) and the conditional distribution of the next symbol \( y_t \) by: \[ y_t \sim g(h_{t-1}, c_t) \tag{5} \] where \( c_t \) is the distinct context vector for \( y_t \), and can be computed by: \[ c_t = \sum_{j=1}^{|x|} \alpha_{t,j} h_j \tag{6} \] 3) **Training goal:** The encoder and the decoder are jointly trained to maximize the conditional log-likelihood: \[ \max_{\theta} \mathcal{L}(\theta) = \max_{\theta} \frac{1}{N} \sum_{i=1}^{N} \log p(y_i | x_i; \theta), \tag{7} \] where \( \theta \) is the set of the model parameters; \( N \) is the size of the training set; and each \( (x_i, y_i) \) is a pair of a source sequence and a target sequence in the training set. **C. Reinforcement Learning for Better Decoding** We can see that this classical encoder-decoder framework aims at maximizing the likelihood of the ground-truth word conditioned on previously generated words. Building on this framework, we propose our reinforcement learning framework, since the text generation process can be viewed as a Markov Decision Process (MDP) \( \{S, A, P, R, \gamma\} \). In the MDP setting, the state \( s_t \) at time step \( t \) consists of the source code snippet \( x \) and the words/actions predicted until \( t \), \( y_0, y_1, \ldots, y_t \). The action space is the dictionary \( \mathcal{Y} \) that the words are drawn from, i.e., \( y_t \in \mathcal{Y} \). With this definition of the state, the state transition function \( P \) is \( s_{t+1} = \{s_t, y_{t+1}\} \), where the action \( y_{t+1} \) becomes a part of the next state \( s_{t+1} \) and the reward \( r_{t+1} \) is received. \( \gamma \in [0, 1] \) is the discount factor.
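The MDP formulation above can be sketched concretely. In this minimal sketch, the EOS token, the horizon `T`, and the pluggable `score` function are illustrative assumptions, not the paper's actual settings:

```python
# Text generation as an MDP: a state is (x, y_0..y_t), an action is the
# next word, and the reward arrives only when the episode terminates.
EOS, T = "<eos>", 10   # illustrative placeholders

def transition(state, action):
    """P: s_{t+1} = {s_t, y_{t+1}} -- the chosen word joins the state."""
    x, y = state
    return (x, y + [action])

def reward(state, reference, score):
    """r_t = 0 before termination; a BLEU-style `score` at the end."""
    _, y = state
    done = len(y) >= T or (len(y) > 0 and y[-1] == EOS)
    return (score(y, reference) if done else 0.0), done
```

Here `score` stands in for any sentence-level metric; in the paper the terminal reward is BLEU.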
The objective of the generation process is to find a policy that maximizes the expected reward of the generated sentence sampled from the model's policy: \[ \max_{\theta} \mathbb{E}_{x \sim \mathcal{D},\, \tilde{y} \sim p(\tilde{y} | x; \theta)} [R(\tilde{y}, x)], \] where \( \theta \) denotes the policy parameters to be learnt, \( \mathcal{D} \) is the training set, \( \tilde{y} \) is the predicted action/word sequence, and \( R \) is the reward function. Our problem can be formulated as follows. Given a code snippet \( x = (x_1, x_2, \ldots, x_{|x|}) \), our goal is to find a policy that generates a sequence of words \( y = (y_1, y_2, \ldots, y_{|y|}) \) from the dictionary \( \mathcal{Y} \) with the objective of maximizing the expected reward. **A. Lexical Level** The key insight into the lexical-level representation of source code is that comments are largely drawn from the lexical tokens of code, mainly from function names, variable names and so on. We therefore adopt an LSTM to represent the lexical information of source code. **B. Syntactic Level** Different from previous methods that only utilize the word sequence to represent code, we also consider the structural information of source code. The front end of a compiler decomposes a program into constituents and produces intermediate code according to the syntax of the language [19]. These constituents are called programming constructs, and a context-free grammar specifies the syntax of programming constructs [19]. The AST is one type of intermediate code that represents the hierarchical syntactic structure of a program [19]. Ultimately, our goal is to specify learning-based techniques for encoding arbitrarily long sequences of lexical elements.
Since the non-terminal nodes in ASTs subsume sequences of lexical elements [19], suppose each AST node has a special attribute \textit{repr} that stores a vector representation: a code that characterizes the node and, by extension, the sequence of lexical elements the node subsumes. We mine the codes in such a way that similar sequences have similar codes. One learning-based technique is based on the AST, a tree representation that can have an arbitrary number of levels comprising nodes with an arbitrary number of children, and herein lies the problem. Similar to a traditional LSTM unit, our proposed AST-based LSTM unit also contains an input gate, a memory cell and an output gate. However, different from a standard LSTM unit, which has only one forget gate for its previous unit, an AST-based LSTM unit contains multiple forget gates. Given an AST, for any node \( j \), let the hidden state and memory cell of its \( l \)-th child be \( h_{jl} \) and \( c_{jl} \) respectively. Following [18], the hidden state is updated as follows. \[ i_j = \sigma(W^{(i)}x_j + \sum_{l=1}^{N} U^{(i)}_{jl}h_{jl} + b^{(i)}), \] \[ f_{jk} = \sigma(W^{(f)}x_j + \sum_{l=1}^{N} U^{(f)}_{kl}h_{jl} + b^{(f)}), \] \[ o_j = \sigma(W^{(o)}x_j + \sum_{l=1}^{N} U^{(o)}_{jl}h_{jl} + b^{(o)}), \] \[ u_j = \tanh(W^{(u)}x_j + \sum_{l=1}^{N} U^{(u)}_{jl}h_{jl} + b^{(u)}), \] \[ c_j = i_j \odot u_j + \sum_{l=1}^{N} f_{jl} \odot c_{jl}, \] \[ h_j = o_j \odot \tanh(c_j) \] where \( k = 1, 2, \ldots, N \). Each of \( i_j, f_{jk}, o_j \) and \( u_j \) denotes an input gate, a forget gate, an output gate, and a state for updating the memory cell, respectively. \( W^{(\cdot)} \) and \( U^{(\cdot)} \) are weight matrices, \( b^{(\cdot)} \) is a bias vector, and \( x_j \) is the word embedding of the \( j \)-th node. \( \sigma(\cdot) \) is the logistic function, and the operator \( \odot \) denotes element-wise multiplication between vectors.
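The update rules above can be sketched numerically. The following is a minimal NumPy sketch; the hidden size, the random weights, and the zero-initialized leaf children are illustrative assumptions, not the paper's trained parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
d, N = 4, 2                      # hidden size; children per node (binarized AST)
sig = lambda z: 1.0 / (1.0 + np.exp(-z))

# One W/b per gate; one U per child for i, o, u; the forget gate has
# N*N child matrices Uf[k][l], matching the f_{jk} equation above.
W  = {g: 0.1 * rng.standard_normal((d, d)) for g in "ifou"}
U  = {g: [0.1 * rng.standard_normal((d, d)) for _ in range(N)] for g in "iou"}
Uf = [[0.1 * rng.standard_normal((d, d)) for _ in range(N)] for _ in range(N)]
b  = {g: np.zeros(d) for g in "ifou"}

def ast_lstm(x_j, children):
    """One AST-LSTM step; `children` is a list of N (h, c) pairs."""
    hs, cs = zip(*children)
    z = lambda g: W[g] @ x_j + sum(U[g][l] @ hs[l] for l in range(N)) + b[g]
    i, o, u = sig(z("i")), sig(z("o")), np.tanh(z("u"))
    # One forget gate per child k, each attending to every child's hidden state.
    f = [sig(W["f"] @ x_j + sum(Uf[k][l] @ hs[l] for l in range(N)) + b["f"])
         for k in range(N)]
    c_j = i * u + sum(f[l] * cs[l] for l in range(N))
    return o * np.tanh(c_j), c_j

zero = (np.zeros(d), np.zeros(d))
h, c = ast_lstm(rng.standard_normal(d), [zero, zero])   # a leaf node
```

With N = 1 the cell collapses to a single forget gate over a single predecessor, i.e., a standard chain LSTM.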
It’s worth mentioning that when the tree is simply a chain, namely \( N = 1 \), the AST-based LSTM unit reduces to the standard LSTM. Notice that the number of children \( N \) varies across nodes of different ASTs, which may cause problems in parameter sharing. For simplification, we transform the generated ASTs into binary trees by two steps which have been adopted in [20]: a) split nodes with more than two children; b) merge the newly generated nodes so that every node has at most two children. A hybrid attention layer then fuses the sequential and AST-based representations: at each decoding step we concatenate the attention summaries of the two encoders' hidden states and feed them into a one-layer linear network, obtaining a context vector \( d_t \) calculated as a summarization of the hidden states weighted by attention scores. The context vector is then used for the \((t + 1)\)-th word prediction through an additional hidden layer \( \tilde{s}_t \): \[ \tilde{s}_t = \tanh(W_c[s_t; d_t] + b_d), \] where \([s_t; d_t]\) is the concatenation of \( s_t \) and \( d_t \). The model predicts the next word using a softmax function. Letting \( p_\pi \) denote the policy \( \pi \) determined by the actor network, and \( p_\pi(y_t|s_t) \) the probability distribution of generating the \( t \)-th word \( y_t \), we have: \[ p_\pi(y_t|s_t) = \text{softmax}(W_s \tilde{s}_t + b_s). \] **B. Critic Network** Unlike the traditional encoder-decoder framework, which generates the sequence by maximizing the likelihood of the next word given the ground-truth word, we directly optimize evaluation metrics such as BLEU [22] for code summarization. We apply a critic network to approximate the value of the generated actions at time step \( t \). Different from the actor network, this critic network outputs a single value instead of a probability distribution at each decoding step. Before we introduce our value network, we first introduce the value function.
Given the policy \( \pi \), the sampled actions and the reward function, the value function \( V^\pi \) is defined as the prediction of the total reward from the state \( s_t \) at step \( t \) under policy \( \pi \), which is formulated as follows: \[ V^\pi(s_t) = \mathbb{E}_{y_{t+1:T} \sim \pi}\Big[\sum_{l=t}^{T-1} r_{l+1} \,\Big|\, s_t, h\Big] \] where \( T \) is the maximum decoding step and \( h \) is the representation of the code snippet. For code summarization, we can only obtain an evaluation score (BLEU) when the sequence generation process (or episode) is finished. The episode terminates when the step exceeds the max step \( T \) or the end-of-sequence (EOS) token is generated. Therefore, we define the reward as follows: \[ r_t = \begin{cases} 0 & t < T \\ \text{BLEU} & t = T \text{ or EOS}. \end{cases} \] Mathematically, the critic network tries to minimize the following mean-square-error loss function: \[ \mathcal{L}(\phi) = \frac{1}{2} \| V^\pi(s_t) - V^\phi(s_t) \|^2, \] where \( V^\pi(s_t) \) is the target value, \( V^\phi(s_t) \) is the value predicted by the critic network and \( \phi \) is the parameter of the critic network. **C. Model Training** We use the policy gradient method to optimize the policy directly, as is widely done in reinforcement learning. For the actor network, the goal of training is to minimize the negative expected reward, which can be defined as \( \mathcal{L}(\theta) = -\mathbb{E}_{y_{1:T} \sim \pi}[\sum_{t=0}^{T-1} r_{t+1}] \), where \( \theta \) is the parameter of the actor network. Denoting all the parameters as $\Theta = \{\theta, \phi\}$, the total loss of our model can be represented as $L(\Theta) = L(\theta) + L(\phi)$. For the policy gradient, it is typically better to optimize an expression of the following form [23]: $$\nabla_\theta L(\Theta) = \mathbb{E} \sum_{t=0}^{T-1} A^\pi(s_t, y_{t+1}) \nabla_\theta \log \pi(y_{t+1} | s_t),$$ (20) where $A^\pi(s_t, y_{t+1})$ is the advantage function.
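Eq. (20) can be made concrete with a toy, state-independent softmax policy over a tiny vocabulary (purely illustrative; the paper's actor conditions on the LSTM decoder state):

```python
import numpy as np

V = 5                      # toy vocabulary size (illustrative assumption)
theta = np.zeros(V)        # logits of a state-independent softmax policy

def pi(theta):
    """Softmax policy over the vocabulary."""
    e = np.exp(theta - theta.max())
    return e / e.sum()

def pg_step(theta, action, advantage, lr=0.1):
    """One step of Eq. (20): theta += lr * A(s, y) * grad log pi(y | s)."""
    grad_log = -pi(theta)
    grad_log[action] += 1.0      # gradient of log-softmax at `action`
    return theta + lr * advantage * grad_log

# A positive advantage raises the chosen word's probability.
theta2 = pg_step(theta, action=2, advantage=1.0)
```

Scaling the log-probability gradient by the advantage (rather than the raw return) is what the following paragraph's variance argument is about.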
The reason we choose the advantage function is that it achieves smaller variance compared with some alternatives, such as the TD framework. At the $t$-th step, the parameters $\Theta$ are updated by: $$\Theta \leftarrow \Theta - \frac{\rho}{\sqrt{\sum_{i=1}^{t} g_i^2}} g_t,$$ (23) where $\rho$ is the initial learning rate and $g_t$ is the subgradient at time $t$ (an AdaGrad-style update). Algorithm 1 summarizes our proposed model described above. **Algorithm 1** Actor-Critic training for code summarization. 1: Initialize the actor $\pi(y_{t+1} | s_t)$ and the critic $V^\pi(s_t)$ with random weights $\theta$ and $\phi$; 2: Pre-train the actor with the cross-entropy loss; 3: Pre-train the critic to estimate $V^\pi(s_t)$ with the mean square loss, keeping the actor fixed; 4: for $t = 1 \rightarrow T$ do 5: Receive a random example, and generate a sequence of actions $\{y_1, \ldots, y_T\}$ according to the current policy $\pi^\theta$; 6: Calculate the advantage estimate $A^\pi$ according to Eq. 21; 7: Update the critic weights $\phi$ using the gradient in Eq. 22; 8: Update the actor weights $\theta$ using the gradient in Eq. 20. V. EXPERIMENTS AND ANALYSIS To evaluate our proposed approach, in this section, we conduct experiments to answer the following questions: - **RQ1.** Does our proposed approach improve the performance of code summarization when compared with some state-of-the-art approaches? - **RQ2.** What is the effectiveness of each component of our proposed model? For example, what is the performance of the hybrid code representation, and that of reinforcement learning? - **RQ3.** What is the performance of our proposed model on datasets with different code or comment lengths? We ask RQ1 to evaluate our deep reinforcement learning-based model against some state-of-the-art baselines, which we describe in the following subsection. We ask RQ2 in order to evaluate each component of our model.
We ask RQ3 to evaluate our model when varying the length of code or comments. In the following subsections, we first describe the dataset, some evaluation metrics and the training details. Then, we introduce the baselines for RQ1. Finally, we report our results and analysis for the research questions. A. Dataset Preparation We evaluate the performance of our method using the dataset in [26], which was obtained from a popular open-source project hosting platform, GitHub. The dataset contains 108,726 code-comment pairs. The vocabulary sizes of code and comments are 50,400 and 31,350, respectively. For cross-validation, we first shuffle the dataset and use the first 60% for training, 20% for validation and the rest for testing. To construct the tree structure of code, we parse Python code into abstract syntax trees via Python's ast library. To convert code into sequential text, we tokenize the code by \{`"'":;)(!?(space)\}, which has been used in [8]. We tokenize the comments by \{(space)\}. Figure 5 shows the length distributions of code and comments, respectively. From Figure 5a, we can find that the lengths of most code snippets lie between 20 and 60. This echoes the quote in [27] that “Functions should hardly ever be 20 lines long”. In the Python language, the typical length should be even shorter. From Figure 5b, we can see that the lengths of nearly all the comments are between 5 and 15. This reveals that the comment sequences we need to generate will not be too long. B. Evaluation Metrics We evaluate the performance of our proposed model based on four widely-used evaluation criteria from the areas of neural machine translation and image captioning, i.e., BLEU [22], METEOR [28], ROUGE-L [29] and CIDEr [30]. BLEU measures the average n-gram precision on a set of reference sentences, with a penalty for overly short sentences. METEOR is recall-oriented and measures how well our model captures content from the references in the output.
ROUGE-L naturally takes into account sentence-level structural similarity and automatically identifies the longest co-occurring in-sequence n-grams. CIDEr is a consensus-based evaluation protocol for image descriptions. To keep the paper compact, we put the formulation of each metric in Table IV (see Appendix A). C. Training Details The hidden sizes of the encoder and decoder LSTM networks are both set to 512, and the word embedding size is set to 512. The mini-batch size is set to 64, while the learning rate is set to 0.001. We pretrain both the actor network and the critic network for 10 epochs each, and then train the actor-critic network jointly for 10 epochs. We record the perplexity\textsuperscript{2}/reward every 50 iterations. Figure 6 shows the perplexity and reward curves of our method. All the experiments in this paper are implemented in Python 2.7, and run on a computer with a 2.2 GHz Intel Core i7 CPU, 64 GB of 1600 MHz DDR3 RAM, and a Titan X GPU with 12 GB of memory, running Ubuntu 16.04. D. RQ1: Comparison with Baselines We compare our model with the following baseline methods: - Seq2Seq [31] is a classical encoder-decoder framework in neural machine translation, which encodes the source sentences into a hidden space and decodes it into target ones. In our comparison, the encoder and decoder are both based on LSTMs. - Seq2Seq+Attn [21] is a derived version of the Seq2Seq model with an attentional layer for sentence alignment. - Tree2Seq [20] follows the same architecture as Seq2Seq and applies a Tree-LSTM as the encoder, as used for the task of code clone detection. - Tree2Seq+Attn [32] is a derived version of the Tree2Seq model with an attentional layer, which has been applied in neural machine translation. - Hybrid2Seq(+Attn+A2C) represents three versions of our proposed model with/without the Attn and A2C components. Table I shows the experimental results of the comparison between our proposed model and some previous ones.
From this table, we can see that our proposed model outperforms the other baselines on almost all evaluation metrics. Comparing Seq2Seq/Tree2Seq with their corresponding attention-based versions, we can see that attention is effective in aligning code tokens with comment tokens. We can also see that simply encoding the tree structure of code performs worse than simply encoding the code as a sequence. A likely explanation is that the words of comments are often drawn directly from the tokens of the code. Our model, which considers both the structural and the sequential information of code, achieves the best performance in this comparison.

\textsuperscript{2}Perplexity is a function of the cross-entropy loss and is widely used in the evaluation of many natural language processing tasks.

Table I: Comparison of the overall performance between our model and previous methods. (Best scores are in boldface.)

<table>
<thead>
<tr> <th>Model</th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>METEOR</th> <th>ROUGE-L</th> <th>CIDER</th> </tr>
</thead>
<tbody>
<tr> <td>Seq2Seq</td> <td>0.1660</td> <td>0.0251</td> <td>0.0100</td> <td>0.0056</td> <td>0.0555</td> <td>0.2838</td> <td>0.1262</td> </tr>
<tr> <td>Seq2Seq+Attn</td> <td>0.1897</td> <td>0.0419</td> <td>0.0200</td> <td>0.0133</td> <td>0.0640</td> <td>0.3083</td> <td>0.2594</td> </tr>
<tr> <td>Tree2Seq</td> <td>0.1649</td> <td>0.0236</td> <td>0.0096</td> <td>0.0053</td> <td>0.0501</td> <td>0.2794</td> <td>0.1168</td> </tr>
<tr> <td>Tree2Seq+Attn</td> <td>0.1887</td> <td>0.0417</td> <td>0.0197</td> <td>0.0129</td> <td>0.0644</td> <td>0.3086</td> <td>0.2331</td> </tr>
<tr> <td>Hybrid2Seq+Attn+A2C (Our)</td> <td>0.2527</td> <td>0.1033</td> <td>0.0640</td> <td>0.0441</td> <td>0.0929</td> <td>0.3913</td> <td>0.7501</td> </tr>
</tbody>
</table>

Table II: Effectiveness of each component of our proposed model. (Best scores are in boldface.)
<table>
<thead>
<tr> <th>Model</th> <th>BLEU-1</th> <th>BLEU-2</th> <th>BLEU-3</th> <th>BLEU-4</th> <th>METEOR</th> <th>ROUGE-L</th> <th>CIDER</th> </tr>
</thead>
<tbody>
<tr> <td>Seq2Seq+Attn+A2C</td> <td>0.2421</td> <td>0.0919</td> <td>0.0513</td> <td>0.0325</td> <td>0.0882</td> <td>0.3935</td> <td>0.6390</td> </tr>
<tr> <td>Tree2Seq+Attn+A2C</td> <td>0.2309</td> <td>0.0854</td> <td>0.0499</td> <td>0.0338</td> <td>0.0843</td> <td>0.3767</td> <td>0.6060</td> </tr>
<tr> <td>Hybrid2Seq</td> <td>0.1837</td> <td>0.0379</td> <td>0.0183</td> <td>0.0122</td> <td>0.0604</td> <td>0.3020</td> <td>0.2223</td> </tr>
<tr> <td>Hybrid2Seq+Attn</td> <td>0.1965</td> <td>0.0516</td> <td>0.0280</td> <td>0.0189</td> <td>0.0693</td> <td>0.3154</td> <td>0.3475</td> </tr>
<tr> <td>Hybrid2Seq+Attn+A2C (Our)</td> <td>0.2527</td> <td>0.1033</td> <td>0.0640</td> <td>0.0441</td> <td>0.0929</td> <td>0.3913</td> <td>0.7501</td> </tr>
</tbody>
</table>

E. RQ2: Performance of Hybrid Code Representation and Reinforcement Learning

Table II shows the effectiveness of the main components of our proposed model. Comparing the results of Seq2Seq+Attn/Tree2Seq+Attn with (Table II) and without (Table I) the advantage actor-critic (A2C) network, we can see that the proposed A2C component substantially boosts the performance of comment generation for source code. We can also see that the proposed approach of integrating an LSTM for content with an AST-based LSTM for structure is effective at representing code, compared with the corresponding non-hybrid models in Table I. Furthermore, this also verifies that our proposed hybrid attention mechanism works well in our model.

F. RQ3: Parameter Analysis

Figures 7 and 8 show the performance of our proposed method compared with two baselines on data of varying code lengths and comment lengths, respectively. From Figure 7, we can see that our model performs best among the compared methods on all four metrics across different code lengths.
Additionally, we can see that the proposed model maintains stable performance even as the code length increases dramatically. We attribute this effect to the hybrid representation adopted in our model. For Figure 8, recall the comment length distribution in Figure 5b. Since nearly all comment lengths in the testing data are under 20, we omit the performance analysis over data whose comment length is larger than 20. From this figure, we can see that the performance of our model and the baselines varies dramatically on the four metrics with respect to different comment lengths.

G. Qualitative Analysis and Visualization

We show two examples in Table III. It is clear that the comments generated by our model are closest to the ground truth. Although the models without A2C can generate some tokens that also occur in the ground truth, they cannot predict tokens that appear infrequently in the training data. In contrast, our actor-critic learning based model can generate tokens that are closer to the ground truth, such as git and symbolic. This can be explained by the fact that our model explores the word space more comprehensively and optimizes the BLEU score directly. In Table III, we also visualize the two attentions of our proposed model for the target sentences. For example, for Case 1 with target sentence `check if git is installed`, we can notice that the `str-attn` (left of the figure) focuses more on tokens like `OSError`, `False`, `git.version`, which reflect the structure of the code. On the other hand, the attention of `txt-attn` (right of the figure) is comparatively dispersed, and focuses on some tokens like `def`, which are of little significance for code summarization. This verifies our assumption that the LSTM captures the sequential content of code, while the AST-based LSTM captures its structural information. Thus, it is reasonable to fuse them for a better representation. VI.
THREATS TO VALIDITY AND LIMITATIONS

One threat to validity is that our approach is evaluated only on Python code collected from GitHub, so the results may not be representative of all code comments. However, Python is a popular programming language used in a large number of projects. In the future, we will extend our approach to other programming languages. Another threat to validity concerns the metrics we choose for evaluation. It has always been a tough challenge to evaluate the similarity between two sentences for tasks such as neural machine translation and image captioning [1]. In this paper, we adopt only four popular metrics; it would be worthwhile to evaluate the generated text from more perspectives, such as human evaluation. Furthermore, from the deep reinforcement learning perspective, we only use the BLEU score of the generated sentence as the reward. It is well known that one of the biggest challenges for a reinforcement learning framework is designing a reward function that measures the value of actions correctly, and this remains an open problem. In our future work, we plan to devise a reward function that reflects the value of each action more accurately.

VII. RELATED WORK

In this section, we briefly review related work on deep learning for code analysis, source code summarization and deep reinforcement learning.

A. Deep Learning on Code Analysis

With the successful development of deep learning, it has become more and more prevalent to represent source code with deep models in software engineering research. Gu et al. [33] use a sequence-to-sequence deep neural network [31], originally introduced for SMT, to learn intermediate distributed vector representations of natural language queries, which they use to predict relevant API sequences. Mou et al.
[34] learn distributed vector representations using custom convolutional neural networks to represent features of code snippets; assuming that student solutions to various coursework problems have been intermixed, they seek to recover the solution-to-problem mapping via classification. Li et al. [35] exploit heap structure to define graph neural networks, a machine learning model based on GRUs (a type of RNN), to learn directly from heap graphs. Piech et al. [36] and Parisotto et al. [37] learn distributed representations of source code input/output pairs and use them to assess and review student assignments or to guide program synthesis from examples. Neural code-generative models also use distributed representations to capture context, a common practice in NLP. For example, the work of Maddison and Tarlow [38] and other neural language models (e.g., the LSTMs in Dan et al. [39]) use distributed representations of the context while sequentially generating code. Ling et al. [40] and Allamanis et al. [41] combine the code-context distributed representation with distributed representations of other modalities (e.g., natural language) to synthesize code. While all of these representations can, in principle, encode unbounded context, handling all code dependencies of arbitrary length is an unsolved problem. Some neural architectures, such as LSTMs [42], GRUs [43] and their variants, have made progress on this problem and handle moderately long-range dependencies.

B. Source Code Summarization

Code summarization is a novel task in the area of software engineering that has drawn great attention in recent years. Existing work on code summarization can be mainly categorized into rule-based approaches [10], statistical language model based approaches [4] and deep learning based approaches [11], [5], [12]. Sridhara et al.
[10] first construct a software word usage model, and generate comments from the tokenized function/variable names via rules. Movshovitz-Attias et al. [4] predict comments from Java source files of open-source projects using topic models and n-grams. In [11], the authors introduce an attentional neural network that employs convolution over the input tokens to detect local time-invariant and long-range topical attention features, summarizing source code snippets into short, descriptive, function-name-like summaries. Iyer et al. [5] propose to use LSTM networks with attention to produce sentences that describe C# code snippets and SQL queries. In Haije's thesis [12], the code summarization problem is modeled as a translation task, and classical translation models such as Seq2Seq [31] and Seq2Seq with attention [21] are employed. Unlike previous studies, we take both the tree structure and the sequential content of source code into consideration for a better representation of code.

C. Deep Reinforcement Learning

Reinforcement learning [44], [45], [46], known as "a machine learning technique concerning how software agents ought to take actions in an environment so as to maximize some notion of cumulative reward", is well suited to the task of decision-making. Recently, a professional-level computer Go program was designed by Silver et al. [47] using deep neural networks and Monte Carlo tree search. Human-level gaming control [48] has been achieved through deep Q-learning. A visual navigation system [49] has recently been proposed based on an actor-critic reinforcement learning model. With the success of AlphaGo [47], deep reinforcement learning has shown great potential for decision-making tasks through exploitation and exploration.
Text generation can also be formulated as a decision-making problem, and there have been several reinforcement learning based works on such tasks, including image captioning [50], dialogue generation [51] and sentence simplification [52]. Ren et al. [50] propose an actor-critic deep reinforcement learning model with an embedding reward for image captioning. Li et al. [51] integrate a developer-defined reward with the REINFORCE algorithm for dialogue generation. In this paper, we follow an actor-critic reinforcement learning framework, while our focus is on encoding the structural and sequential information of code snippets simultaneously with an attention mechanism.

VIII. CONCLUSION AND FUTURE WORK

In this paper, we propose a tree-structured actor-critic learning model to generate summaries for code snippets. Specifically, we first encode the structure and the sequential content of code via an AST-based LSTM and an LSTM, respectively. Then we add a hybrid attention layer to integrate them, and feed the resulting code representation vector into an actor-critic framework. Comprehensive experiments on a real-world dataset show that our proposed model outperforms competitive baselines and achieves state-of-the-art performance on automatic metrics, namely BLEU, METEOR, ROUGE-L and CIDER. In our future work, we plan to design a copy mechanism to cope with rare words that are out of our vocabulary, and to extend our experiments to other programming languages such as Java.

**APPENDIX**

**A. Evaluation Metrics**

- $a$: the candidate sentence; $b$: the set of reference sentences; $w_n$: an n-gram; $c_x(y_n)$: the count of n-gram $y_n$ in sentence $x$.

### Table IV: Formulation of different metrics.
<table>
<thead>
<tr> <th>Metrics</th> <th>Formulation</th> <th>Remarks</th> </tr>
</thead>
<tbody>
<tr> <td><strong>BLEU</strong></td> <td>\( p_n = \frac{\sum_{w_n \in a} \min \left( c_a(w_n), \max_{j} c_{b_j}(w_n) \right)}{\sum_{w_n \in a} c_a(w_n)} \)<br>\( BP = \begin{cases} 1 &amp; \text{if } c &gt; r \\ e^{1 - r/c} &amp; \text{if } c \leq r \end{cases} \)<br>\( BLEU = BP \times \exp \left( \sum_{n=1}^{N} \alpha_n \log p_n \right) \)</td> <td>\( r \) is the reference sentence length, \( c \) the candidate sentence length, \( p_n \) the modified n-gram precision, and \( \alpha_n \) a positive weight for each order \( n \) up to \( N \).</td> </tr>
<tr> <td><strong>METEOR</strong></td> <td>\( METEOR = \max_{j=1,\cdots,|b|} F_{mean}(a, b_j) \left( 1 - Pen(a, b_j) \right) \)</td> <td>\( F_{mean} \) is the harmonic mean of unigram precision and recall (weighted toward recall); \( Pen \) is a fragmentation penalty based on the number of matched chunks.</td> </tr>
<tr> <td><strong>ROUGE-L</strong></td> <td>\( R_{lcs} = \frac{LCS(a, b_j)}{|b_j|}, \quad P_{lcs} = \frac{LCS(a, b_j)}{|a|} \)<br>\( ROUGE\text{-}L = \max_{j=1,\cdots,|b|} \frac{(1 + \beta^2) R_{lcs} P_{lcs}}{R_{lcs} + \beta^2 P_{lcs}} \)</td> <td>\( LCS \) is the length of the longest common subsequence; \( \beta \) weights recall against precision.</td> </tr>
<tr> <td><strong>CIDER</strong></td> <td>\( CIDER_n = \frac{1}{|b|} \sum_{j=1}^{|b|} \frac{g^n(a) \cdot g^n(b_j)}{\lVert g^n(a) \rVert \, \lVert g^n(b_j) \rVert} \)<br>\( CIDER = \sum_{n=1}^{N} w_n \, CIDER_n \)</td> <td>\( g^n(x) \) is the TF-IDF-weighted vector of the n-grams of \( x \); \( w_n \) is a uniform weight over n-gram orders.</td> </tr>
</tbody>
</table>

### Table V: More training details for our model.

<table>
<thead>
<tr> <th>Parameter</th> <th>Value</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>layers</td> <td>1</td> <td>Number of layers in the LSTM encoder/decoder</td> </tr>
<tr> <td>rnn_size</td> <td>512</td> <td>Size of LSTM hidden states</td> </tr>
<tr> <td>input_feed</td> <td>1</td> <td>If value is 1, feed the context vector at each time step as additional input (via concatenation with the word embeddings) to the decoder.</td> </tr>
<tr> <td>batch_size</td> <td>64</td> <td>Batch size</td> </tr>
<tr> <td>optim</td> <td>AdaGrad</td> <td>Optimization method [sgd | adagrad | adadelta | adam]</td> </tr>
<tr> <td>lr</td> <td>1e-3</td> <td>Initial learning rate</td> </tr>
<tr> <td>learning_rate_decay</td> <td>0.5</td> <td>If update_learning_rate, decay learning rate by this much if (i) perplexity does not decrease on the validation set or (ii) epoch has gone past start_decay_at</td> </tr>
<tr> <td>start_decay_at</td> <td>5</td> <td>Start decaying every epoch after and including this epoch</td> </tr>
<tr> <td>dropout</td> <td>0.3</td> <td>Dropout probability; applied between LSTM stacks.</td> </tr>
</tbody>
</table>
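The BLEU formulation in Table IV can be sketched as a short Python function. This is an illustrative implementation, not the official BLEU script: it assumes uniform weights \( \alpha_n = 1/N \) and uses a simple floor value to avoid \( \log 0 \) on short sentences.

```python
import math
from collections import Counter

def bleu(candidate, references, max_n=4):
    """Sketch of BLEU: modified n-gram precision p_n, clipped against the
    maximum reference count, combined with a brevity penalty BP."""
    weights = [1.0 / max_n] * max_n  # uniform alpha_n
    log_p_sum = 0.0
    for n in range(1, max_n + 1):
        cand_ngrams = Counter(
            tuple(candidate[i:i + n]) for i in range(len(candidate) - n + 1))
        # Clip each candidate n-gram count by its maximum count in any reference.
        max_ref = Counter()
        for ref in references:
            ref_ngrams = Counter(
                tuple(ref[i:i + n]) for i in range(len(ref) - n + 1))
            for g, c in ref_ngrams.items():
                max_ref[g] = max(max_ref[g], c)
        clipped = sum(min(c, max_ref[g]) for g, c in cand_ngrams.items())
        total = max(sum(cand_ngrams.values()), 1)
        p_n = clipped / total
        if p_n == 0.0:
            p_n = 1e-9  # crude smoothing so log() is defined
        log_p_sum += weights[n - 1] * math.log(p_n)
    # Brevity penalty: compare candidate length c against reference length r.
    c, r = len(candidate), min(len(ref) for ref in references)
    bp = 1.0 if c > r else math.exp(1.0 - r / max(c, 1))
    return bp * math.exp(log_p_sum)
```

For example, a candidate identical to its single reference scores 1.0, while shorter or partially matching candidates are penalized by both the clipped precisions and the brevity penalty.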
Abstract

We present a new mechanism, warranties, to enable building distributed systems with linearizable transactions. A warranty is a time-limited assertion about one or more distributed objects. These assertions generalize optimistic concurrency control, improving throughput because clients holding warranties need not communicate to verify the warranty's assertion. Updates that might cause an active warranty to become false are delayed until the warranty expires, trading write latency for read latency. For workloads biased toward reads, warranties improve scalability and system throughput. Warranties can be expressed using language-level computations, and they integrate harmoniously into the programming model as a form of memoization. Experiments with some non-trivial programs demonstrate that warranties enable high performance despite the simple programming model.

1 Introduction

Although the trend for many systems has been to weaken consistency in order to achieve greater scalability, strong consistency is critical when lives or money are at stake. Examples include systems for medical information, banking, payment processing, and the military. Users of weakly consistent systems may be confused by applications that appear buggy. Moreover, weak consistency can significantly complicate the job of developers who try to detect and repair inconsistencies at the application layer. Consistency failures at the bottom of a software stack can percolate up through the stack and affect higher layers in unpredictable ways, requiring defensive programming. The need for strong consistency and a simple programming model has kept databases with ACID transactions in business. However, transactions are often considered to have poor performance, especially in a distributed setting.
In this work, we introduce warranties, a new mechanism that improves the performance of transactions, enabling them to scale better both with the number of application clients and with the number of persistent storage nodes. Warranties help avoid the unfortunate choice between consistency and performance.

2 Background and system model

We assume a distributed system in which each node serves one of two main roles: client nodes perform computations locally using persistent data from elsewhere, and persistent storage nodes (stores) store the persistent data. Client nodes obtain copies of persistent data from stores, perform computations, and send updates to the persistent data back to the stores. For example, the lower two tiers of the traditional three-tier web application match this description: application servers are the clients and database servers are the stores. Our goal is a simple programming model for application programmers, offering strong consistency so they do not need to reason about inconsistent or out-of-date state. In particular, we want linearizability [25], so each committed transaction acts as though it executes atomically and in logical isolation from the rest of the system. Linearizability strengthens serializability [42, 8] to offer external consistency. A partially successful attempt at such a programming model is the Java Persistence API (JPA) [12], which provides an object-relational mapping (ORM) that translates accesses to language-level objects into accesses to underlying database rows. JPA implementations such as Hibernate [27] and EclipseLink [15] are widely used to build web applications. However, we want to improve on both the consistency and performance of JPA. We assume that the working set of both clients and stores fits in the node's memory. This assumption is reasonable for many applications, though not for large-scale data analytics applications, which we do not target.
In a distributed transaction system using OCC (e.g., Thor [37]), clients fetch and then cache persistent objects across transactions. Optimistic caching allows client transactions to largely avoid talking to stores until commit time, unlike with pessimistic locking. The system is faster because persistent data is replicated in the memories of potentially many client nodes. However, care must be taken to avoid inconsistency among the cached copies. Because of its performance advantages, optimism has become increasingly popular for JPA applications, where the best performance is usually achieved through an "optimistic locking" mode that appears to provide strong consistency in some but not all implementations of JPA.¹ To provide strong consistency, OCC logs reads and writes to objects. As part of committing the transaction, clients send the transaction log to stores involved in the transaction. The stores then check that the state of each object read matches that in the store (typically by checking version numbers), and then perform updates. To scale up a distributed computing system of this sort, it is important to be able to add storage nodes across which persistent data and client requests can be distributed. As long as a given client transaction accesses data at just one store, and load is balanced across the stores, the system scales well: each transaction can be committed with just one round trip between the client and the accessed store. In general, however, transactions access information located at multiple stores. For example, consider a web shopping application. A transaction that updates the user's shopping cart may still need to read information shared among many users of the system, such as details of the item purchased. Accessing multiple stores hurts scalability. For a transaction to commit serializably, it must be known at commit time that all objects read during the transaction were up to date. A two-phase commit (2PC) is used to ensure this is the case.
In the first phase (the prepare phase), each store checks that the transaction can be committed and, if so, readies the updates to be committed; it then reports to the coordinator whether the transaction is serializable. If the transaction can be committed at every store, all stores are told to commit in the commit phase. Otherwise, the transaction is aborted and its effects are rolled back.

If popular persistent data is accessed by many clients, the read contention between clients interferes with scalability. Each client committing a transaction must execute a prepare phase at the store of that data. The work done by the prepare phase consists of write prepares on objects that have been updated by the transaction, and read prepares on objects that have been read. In both cases, the object is checked to ensure that the version used was up to date. Read prepares can make the nodes storing popular objects into bottlenecks even when those objects are rarely updated. This is a fundamental limit on the scalability of OCC, so a key benefit of warranties is addressing this performance bottleneck. An alternative strategy would be to replicate popular objects across multiple nodes, but keeping replicas in agreement is very costly.

3 The warranty abstraction

A warranty is a time-limited assertion about the state of the system: it is guaranteed to remain true for some fixed period of time. Warranties improve scalability for two reasons: first, because they reduce or eliminate the work needed for read prepares; second, more generally, they enable the distributed caching of computations and enforce a more semantic notion of consistency. Because warranties make guarantees about the state of the system, they allow transactions to be committed without preparing reads against the objects covered by warranties. When all reads to a store involved in a transaction are covered by warranties, that store need not be contacted. Consequently, two-phase commit can be reduced to a one-phase commit in which the prepare and commit phases are consolidated, or even to a zero-phase commit in which no store need be contacted. The result is significantly improved performance and scalability. In this section, we give a more detailed overview of how warranties work.

- Simple state warranties generalize OCC (§3.1) and also, to some extent, leases (§3.2).
- Updates to the system are prevented from invalidating warranties (§3.3), with implications for performance (§3.4).
- Warranty assertions can be expressive, enabling distributed caching of computed results (§3.5).
- Warranties are requested by clients (§3.6) and generated on demand by stores (§3.7).
- Warranties are distributed throughout the system to clients that need them (§3.9).
- The term of warranties can be set automatically, based on run-time measurements (§3.8).

3.1 State warranties

The simplest form of warranty is a state warranty, an assertion that the concrete state of an object has a particular value. A warranty is guaranteed to be true (active) during the warranty's term. At the end of its term, the warranty expires and is no longer guaranteed to be true. For example, a state warranty for an object representing a bank account might be \(\{\text{assert} = \{\text{name} = \text{"John Doe"}, \text{bal} = 20{,}345\}, \text{exp} = 1364412767.1\}\). Here, the field assert specifies the state of the object, and the field exp is the time that the warranty expires. A warranty is issued by a store, and times appearing in warranties are measured by the clock of the store that issued the warranty.

¹The term "optimistic locking" is misleading; locking occurs only during transaction commit. The JPA 2 specification appears to guarantee that objects written by a transaction are up to date, but unfortunately not the objects read, unless they are explicitly locked. Implementations differ in interpretation.
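The state warranty just described can be sketched as a small data structure. This is a hedged illustration (the class and method names are ours, not the paper's API): an assertion about an object's state paired with an expiry time, where an active warranty lets a reader skip the round trip to the store.

```python
import time
from dataclasses import dataclass

@dataclass
class StateWarranty:
    """Illustrative sketch of a state warranty: an asserted concrete
    state plus an absolute expiry time, measured by the clock of the
    store that issued the warranty."""
    state: dict    # asserted state, e.g. {"name": "John Doe", "bal": 20345}
    expiry: float  # expiration time in seconds

    def is_active(self, now=None):
        """While active, the assertion is guaranteed to hold, so a read
        covered by this warranty needs no prepare at the store."""
        now = time.time() if now is None else now
        return now < self.expiry

# Matches the bank-account example in the text. Once expired, a client
# may still use the warranty optimistically, falling back to an
# OCC-style version check at commit time.
w = StateWarranty(state={"name": "John Doe", "bal": 20345},
                  expiry=1364412767.1)
```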
We assume that clocks at nodes are loosely synchronized; well-known methods exist to accomplish this [40]. If a warranty expires before the transaction commits, the warranty may continue to be valid, meaning that the assertion it contains is still true even though clients cannot rely on its remaining true. Clients can, however, still use the warranty optimistically and check at commit time that the warranty remains valid. As can be seen, state warranties generalize optimistic concurrency control: ordinary OCC equates to always receiving a zero-length warranty for the state of the object read, and using that expired warranty optimistically.

3.2 Warranties vs. leases

Leases [21] have been used in many systems (e.g., [51, 2]) to improve performance. Warranties exploit the key insight of leases: that time-limited guarantees increase scalability by reducing coordination overhead. As defined originally by Gray and Cheriton, leases confer time-limited rights to access objects in certain ways, and must be held by clients in order to perform the corresponding access. In contrast, warranties are time-limited assertions about what is true in the distributed system, and are therefore not held by any particular set of nodes. Unlike with leases, an expired warranty may be used to access an object optimistically. Gray does sketch in his dissertation [20] how read leases might be integrated into an optimistic transaction processing system, but we are not aware of any detailed design or implementation. Leases and warranties do partly overlap. Since read leases on objects effectively prevent modifying object state, they in effect enforce assertions regarding the state of that data. Therefore, state warranties can be viewed as read leases that are given to many clients and that cannot be relinquished by those clients. However, we see a fundamental difference between these two perspectives.
The value of the warranty (assertion) perspective is that state warranties naturally generalize to expressive assertions over state: in particular, warranties that specify the results of application-defined computations over the state of potentially many objects.

3.3 Defending warranties

Transactions may try to perform updates that affect objects on which active warranties have been issued. Updates cannot invalidate active warranties without potentially violating transactional isolation for clients using those warranties. Therefore, stores must defend warranties against invalidating updates, a process that has no analogue in OCC. A warranty can be defended against an invalidating update transaction in two ways: the transaction can either be rejected or delayed. If rejected, the transaction aborts and the client must retry it. If delayed, the updating transaction waits until it can be safely serialized. Rejecting the transaction does not solve the underlying problem of warranty invalidation, so delaying is typically the better strategy if the goal is to commit the update. To prevent write starvation, the store stops issuing new warranties until after the commit. The update also shortens the term of subsequent warranties.

3.4 Performance tradeoffs

Using warranties improves read performance for objects on which warranties are issued, but delays writes to those objects. Such a tradeoff appears to be unavoidable with strong consistency. For example, in conventional database systems that use pessimistic locking to enforce consistency, readers are guaranteed to observe consistent states, but update transactions must wait until all read transactions have completed and released their locks. With many simultaneous readers, writers can be significantly delayed. Thus, warranties occupy a middle ground between optimism and pessimism, using time as a way to reduce the coordination overhead incurred with locking.
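The defend-by-delaying behavior of §3.3, which produces the write delay discussed in this tradeoff, can be sketched as follows. This is an illustrative model only (the `Store` class and its methods are our invention, not the paper's design): a write to an object covered by active warranties may not commit until the last conflicting warranty has expired.

```python
class Store:
    """Minimal sketch of how a store defends warranties: an update that
    would falsify an active warranty is delayed until that warranty
    (and any other active warranty on the same object) expires."""

    def __init__(self):
        self.warranties = []  # list of (expiry, object_id) pairs issued

    def issue(self, obj_id, expiry):
        """Record a warranty issued on obj_id, expiring at `expiry`."""
        self.warranties.append((expiry, obj_id))

    def earliest_commit_time(self, obj_id, now):
        """When may a write to obj_id commit? Immediately if no active
        warranty covers it; otherwise after the latest conflicting
        warranty expires (the delay traded against read latency)."""
        conflicting = [exp for exp, oid in self.warranties
                       if oid == obj_id and exp > now]
        return max(conflicting, default=now)

store = Store()
store.issue("account-42", expiry=105.0)
store.issue("account-42", expiry=110.0)
```

A real store would also stop issuing new warranties on the object once a write is pending, so that writers are not starved by a stream of fresh warranties.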
The key to good performance, then, is to issue warranties that are long enough to allow readers to avoid revalidation but not so long that they block writers more than they otherwise would be blocked. For applications where it is crucial to have both high write throughput and high read throughput to the same object, replication is essential, and the cost of keeping object replicas in sync makes strong consistency infeasible. However, if weak consistency is acceptable, there is a simple workaround: implement replication by explicitly maintaining the state in multiple objects. Writes can go to one or more persistent objects that are read infrequently, and only by a process that periodically copies them (possibly after reconciliation of divergent states) to a frequently read object on which warranties can be issued. This is a much easier programming task than starting from weak consistency and trying to implement strong consistency where it is needed. The only challenging part is reconciliation of divergent replicas, which is typically needed in weakly consistent systems in any case (e.g., [50, 47, 14]). ### 3.5 Computation warranties Warranty assertions are not limited to specifying the concrete state of persistent objects. In general, a warranty assertion is an expression in a language that can describe a computation that operates on persistent objects and that can be evaluated at the store. SQL is one query language that fits this description, but in this work, we integrate assertions more tightly with the programming language. Computation warranties provide guarantees about computations described in terms of method calls. In current distributed applications, it is common to use a distributed cache such as memcached [18] to share data and computation across many nodes. For example, web application servers can cache the text of commonly used web pages or content to be included in web pages. 
Computation warranties can be used to cache such computed results without abandoning strong consistency.

**Example: top N items.** Many web applications display the top-ranked $N$ items among some large set (such as advertisements, product choices, search results, poll candidates, or game ladder rankings). Although the importance of having consistent rankings may vary across applications, there are at least some cases in which the right ranking is important and may have monetary or social impact. Election outcomes matter, product rankings can have a large impact on how money is spent, and game players care about ladder rankings. But at present there is no easy and efficient way to ensure that cached computation results are up to date. To cache the results of such a computation, we might define a computation $\text{top}(n, i, j)$, which returns the set $s$ of the $n$ top-ranked items whose indices in an array of items lie between $i$ and $j$. A warranty of the form $s = \text{top}(n, 0, \mathit{numitems})$ then allows clients to share the computation of the top-ranked items within the range. The reason why the $\text{top}$ function has arguments $i$ and $j$ is to permit $\text{top}$ to be implemented recursively and efficiently using results from subranges, on which further warranties are issued. We discuss later in more detail how this approach allows computation warranties to be updated and recomputed efficiently.

**Example: airplane seats.** Checking whether airplane flights have open seats offers a second example of a computation that can be worth caching. Because the client-side viewer may be sorting lists of perhaps hundreds of potential flights, flights are viewed much more often than their seating is updated. Scalability of the system would be hurt if each of these frequent views required read prepares.
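The recursive structure of the $\text{top}(n, i, j)$ computation from the top-N example above can be sketched in plain Java. This is only a sketch under our own assumptions: items are modeled as integer scores, and each recursive call stands in for a subrange result that, in the real system, would be covered by its own (shorter-term) warranty.

```java
import java.util.*;

class TopN {
    // top(n, i, j): the n top-ranked items with indices in [i, j),
    // computed by merging results for the two half-ranges so that
    // warranties on subranges can be reused.
    static List<Integer> top(int n, int i, int j, int[] scores) {
        if (j - i <= n) {
            // Base case: the whole subrange fits in the answer.
            List<Integer> all = new ArrayList<>();
            for (int k = i; k < j; k++) all.add(scores[k]);
            all.sort(Collections.reverseOrder());
            return all;
        }
        int mid = (i + j) / 2;
        // Each recursive result would itself carry a warranty.
        List<Integer> merged = new ArrayList<>(top(n, i, mid, scores));
        merged.addAll(top(n, mid, j, scores));
        merged.sort(Collections.reverseOrder());
        return merged.subList(0, n);
    }
}
```

With this shape, an update to a single item invalidates only the warranties on the subranges that contain it, which is why most updates touch only a small part of the call tree.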
Efficient searching over suitable flights can be supported by issuing warranties guaranteeing that at least a certain number of seats of a specified type are available; for a suitable constant number of seats $n$ large enough to make the purchase, a warranty of this form works: $$\text{flight.seats\_available(type)} \geq n$$ This warranty supports efficient searching over the set of flights on which a ticket might be purchased. It does not help with the actual update when a ticket is purchased on a flight. In this case, it becomes necessary to find and update the actual number of seats available. However, this update can be done quickly as long as the update does not invalidate the warranty. Like state warranties, computation warranties can be used optimistically even if they expire during the transaction. In this case, the dependencies of the computation described in the warranty must be checked at commit time to ensure that the warranty’s assertion remains true, just as objects whose state warranties expire before commit time must be checked. A warranty that is revalidated in this fashion can then be issued as a new warranty. Like active state warranties, active computation warranties must be defended against invalidation by updates. This mechanism is discussed in Section 5.2.

### 3.6 Programming with warranties

As clients compute, they request warranties as needed. State warranties are requested automatically when objects are newly fetched by a computation. Computation warranties can also be generated in a natural way, relying on simple program annotations. Computation warranties explicitly take the form of logical assertions, so they could be requested by using a template for the desired logical assertion. In the airline seat reservation example above, a query of the form `flight.seats_available(type) ≥ ?` could be used to find all available warranties matching the query, and at the same time fill in the “?” with the actual value `n` found in the warranty.
In the case where multiple warranties match, a warranty might be chosen whose duration and value of `n` are “best” according to application-specific criteria. We pursue a more transparent way to integrate warranty queries into the language, via memoized function calls. For example, we can define a memoized method with the signature `memoized boolean seats_lb(type, n)` that returns whether there are at least `n` seats of the desired type still available on the flight. The keyword `memoized` indicates that its result is to be memoized and warranties are to be issued on its result. To use these warranties, client code uses the memoized method as if it were an ordinary method, as in the following code: ```java for (Flight f : flights) if (f.seats_lb(aisle, seats_needed)) display_flights.add(f); ``` When client code performs a call to a memoized method, the client automatically checks to see if a warranty for the assertion `? = seats_lb(type, n)` has either been received already or can be obtained. If so, the result of the method call is taken directly from the warranty. If no warranty can be found for the method call, the client executes the method directly. With appropriate language support, the implementation of such a memoized method is also straightforward: ```java memoized boolean seats_lb(Seat t, int n) { return seats_available(t) >= n; } ``` A language that correctly supports transparent OCC already automatically logs the reads and writes performed on objects; this logging already computes the dependencies of computation warranties. ### 3.7 Generating warranties Warranties are issued by stores, because stores must know about warranties in order to defend them against updates that might invalidate them. However, for scalability, it is important to avoid giving the store extra load. Therefore, it only makes sense to generate warranties for some objects and computations: those that are used much more frequently than they are invalidated. 
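The client-side handling of a memoized call described above can be sketched as follows. The cache keyed by a call signature and the `Supplier`-based fallback are our own simplifications for illustration, not Fabric's actual API; commit-time bookkeeping of potential warranties is omitted.

```java
import java.util.*;
import java.util.function.Supplier;

class MemoizedDispatch {
    // Warranties already received, keyed by the memoized call they cover,
    // e.g. "seats_lb(aisle,2)" -> Boolean.TRUE.
    final Map<String, Object> warrantyCache = new HashMap<>();

    @SuppressWarnings("unchecked")
    <T> T call(String key, Supplier<T> body) {
        Object cached = warrantyCache.get(key);
        if (cached != null) return (T) cached;  // result taken from the warranty
        T result = body.get();                  // no warranty: execute the method
        // In the real system, this result would become a potential warranty
        // proposed to the store when the transaction commits.
        return result;
    }
}
```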
For state warranties, the store already has enough information to decide when to generate a warranty for an object, because it sees both when the object is updated and when it is necessary to check that the version of the object read by a client is up to date. State warranties improve performance by removing the need to do version checks on read objects, but at the cost of delaying updates that would invalidate active warranties. This tradeoff makes sense if the version checks are sufficiently more numerous than the updates. For computation warranties, the store may be able to infer what warranties are needed from client requests, but it makes more sense to have the client do the computational work. Recall that clients that fail to find a suitable warranty compute the warranty assertion themselves. If the assertion is true, it is the basis of a potential warranty that is stored in the client’s local cache and reused as needed during the same transaction. As part of committing the transaction, the client sends such potential warranties to the store, which may issue these warranties, both back to this client and to other clients. The decision whether to issue a warranty properly depends on whether issuing the warranty is expected to be profitable. ### 3.8 Setting warranty terms Depending on how warranty terms are set, warranties can either improve or hurt performance. However, it is usually possible to automatically and adaptively set warranty terms to achieve a performance increase. Warranties improve performance by avoiding read prepares for objects, reducing the load on stores and on the network. If all read and write prepares to a particular store can be avoided, warranties eliminate the need even to coordinate with that store. Warranties can hurt performance primarily by delaying writes to objects. The longer a warranty term is, the longer the write is delayed. If warranty terms are set too long, writers may experience unacceptable delays. 
A good rule of thumb is that we would like writers to be delayed no more than they would be by read locks in a system using pessimistic locks. Excessively long warranties may also allow readers to starve writers, although starvation is mitigated because new warranties are not issued while writers are blocked waiting for a warranty to expire. Note that with pure OCC, writers can block readers by causing all read prepares to fail [43]; thus, warranties shift the balance of power away from writers and toward readers, addressing a fundamental problem with OCC. To find the right balance between the good and bad effects of warranties, we take a dynamic, adaptive approach. Warranty terms are automatically and individually set by stores that store the relevant objects. Fortunately, stores observe enough to estimate whether warranties are likely to be profitable. Stores see both read prepares and write prepares. If the object receives many read prepares and few or no write prepares, a state warranty on that object is likely to be profitable. A similar observation applies to computation warranties. To determine whether to issue a warranty for an object, and its warranty term \( L \) in the case where a warranty is issued, the system plugs measurements of object usage into a simple system model. The system measures the rate \( W \) of writes to each object, and when there is no warranty issued on the object, it also measures the rate \( R \) of reads to the object. Both rates are estimated using an exponentially weighted moving average (EWMA) [28] of the intervals between reads and writes. We modify EWMA to exponentially decay historical read-prepare data during warranty periods, when read prepares cannot be observed. Empirically, this modification improves the accuracy of rate estimation. To lower the overhead of monitoring, unpopular objects are flagged and given lower-cost monitoring as long as they remain unpopular.
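The EWMA-based rate estimation just described might look like the following sketch. The smoothing parameter `alpha` and the class structure are our own; the refinement that decays historical data during warranty periods is omitted.

```java
class RateEstimator {
    // Estimates an event rate from an exponentially weighted moving
    // average of the intervals between events (reads or writes).
    double avgInterval = Double.NaN; // EWMA of inter-event intervals
    long lastEvent = -1;             // timestamp of the previous event
    final double alpha;              // EWMA smoothing weight, 0 < alpha <= 1

    RateEstimator(double alpha) { this.alpha = alpha; }

    void observe(long now) {
        if (lastEvent >= 0) {
            double interval = now - lastEvent;
            avgInterval = Double.isNaN(avgInterval)
                ? interval
                : alpha * interval + (1 - alpha) * avgInterval;
        }
        lastEvent = now;
    }

    // Estimated events per time unit (the R or W of the model).
    double rate() { return Double.isNaN(avgInterval) ? 0 : 1.0 / avgInterval; }
}
```

These estimates of \( R \) and \( W \) are exactly the inputs the store needs for the term-setting model described next.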
To ensure that the expected number of writes delayed by a warranty is bounded by a constant \( k_1 < 1 \) that controls the tradeoff between read and write transactions, the warranty term is set to \( k_1/W \), with a maximum warranty term \( L_{\text{max}} \) used to bound write delays. Our goal is that warranties are profitable: they should remove load from the store, improving scalability. A warranty eliminates roughly \( RL \) read prepares over its term \( L \), but adds the cost of issuing the warranty and some added cost for each write that occurs during the term. The savings of issuing a warranty is positive if each write to an object is observed by at least \( k_2 \) reads for some value \( k_2 \), giving us a condition \( RL \geq k_2 \) that must be satisfied in order to issue a warranty. The value for constant \( k_2 \) can be derived analytically using measurements of the various costs, or set empirically to optimize performance. This way to set terms for state warranties also works for computation warranties, with the following interpretation: uses of a computation warranty are “reads” and updates to its dependencies are “writes”. The tension between write latency and read throughput can also be eased by using warranty refresh in addition to a maximum warranty term. The term \( L \) is computed as above, but warranties are issued to clients with a shorter term corresponding to the maximum acceptable update latency. The issuing store proactively refreshes each such warranty when it is about to expire, so the warranty stays valid at clients throughout its term.

### 3.9 Distributing warranties

Warranties can be used regardless of how they get to clients and can be shared among any number of clients. Therefore, a variety of mechanisms can be used to distribute warranties to clients. One option for warranty distribution is to have clients directly query stores for warranties, but this makes the system less scalable by increasing load on stores.
As shown in Figure 2, stores will be less loaded if warranties are distributed via a content distribution network (CDN) that clients query to find warranties. Going a step further, applications can subscribe to warranties that match a given pattern, as shown in Figure 2. Stores automatically refresh warranties with later expiration times before the old warranties expire, by pushing these extended warranties either directly to clients or into the CDN. Warranty refresh makes it feasible to satisfy client requests with shorter warranty terms, consequently reducing write latency. This strategy for achieving high availability and high durability differs from that used in many current distributed storage systems, which use replication to achieve high availability, low latency, and durability. Those three goals are handled separately here. Distributing warranties through a CDN makes data objects highly available with low latency, without damaging consistency. Because the authoritative copies of objects are located at stores, a write to an object requires a round-trip to its store; the latency this introduces is ameliorated by the support for relatively large transactions, in which communication with stores tends to happen at the end of transactions rather than throughout. To achieve high durability, stores should be implemented using replication, so that each “store” mentioned in this paper is actually a set of replicas. Since wide-area replication of stores implementing strong consistency will have poor performance, we assume store replicas are connected with low latency.

## 4 Transactions and warranties

Warranties improve the performance of OCC by reducing the work needed during the prepare phase and by allowing phases to be eliminated entirely.

### 4.1 The warranty commit protocol

When a transaction completes, the client performs a modified two-phase commit, illustrated in Figure 1 for both read-only and read-write transactions.
In the prepare phase, the client sends the write set of the transaction (if any), along with any warranties in the read set whose term has expired. If all such expired warranties can be renewed, the transaction may commit. Since outstanding warranties may cause the updates to be delayed, the store responds with a commit time indicating when the commit may be applied successfully. When the client receives a commit time from all stores, it checks to ensure the terms of the warranties it holds exceed the maximum commit time. If not, it attempts to renew these warranties beyond the commit time in an additional extend phase. If active warranties are obtained for all dependencies, the client sends the commit message, and the stores commit the updates at the specified time.

### 4.2 Avoiding protocol phases

While a two-phase commit is required in the general case, performance can be improved by eliminating or combining phases when possible. For read-only transactions, the commit phase is superfluous, and clients executing transactions that involve only one store can combine the prepare and commit phases into one round-trip. The optimizations to 2PC that warranties make possible are summarized in Table 1.

<table>
<thead>
<tr>
<th>Stores read</th>
<th>Stores written</th>
<th>Warranties unexpired?</th>
<th>Phases: Warranties</th>
<th>Phases: OCC</th>
</tr>
</thead>
<tbody>
<tr><td>1+</td><td>0</td><td>Y</td><td>0</td><td>1</td></tr>
<tr><td>1+</td><td>0</td><td>N</td><td>1</td><td>1</td></tr>
<tr><td>1</td><td>1</td><td>Y/N</td><td>1</td><td>1</td></tr>
<tr><td>2+</td><td>1</td><td>Y</td><td>1</td><td>2</td></tr>
<tr><td>2+</td><td>2+</td><td>N</td><td>2</td><td>2</td></tr>
</tbody>
</table>

Table 1: Warranties require fewer phases than traditional OCC in some cases (highlighted). The read-only (rows 1–2) and single-store optimizations (row 3) are available with or without warranties. However, unexpired warranties enable eliminating additional phases, shown by the two rows highlighted in gray.
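Read as a decision procedure, Table 1 can be encoded roughly as follows. This encoding is our own, for illustration; the extra extend phase needed when a commit is delayed past warranty expiry is not counted here.

```java
class CommitPhases {
    // storesRead / storesWritten: number of stores the transaction read
    // from and wrote to; allUnexpired: whether every warranty in the
    // read set is still unexpired at commit time.
    static int phases(int storesRead, int storesWritten, boolean allUnexpired) {
        if (storesWritten == 0)                  // read-only transaction
            return allUnexpired ? 0 : 1;         // zero-phase commit if fully warrantied
        int stores = Math.max(storesRead, storesWritten);
        if (stores == 1) return 1;               // single store: combined prepare+commit
        if (storesWritten == 1 && allUnexpired)  // reads warrantied, one written store
            return 1;
        return 2;                                // general two-phase commit
    }
}
```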
Row 1 shows that read-only transactions whose read set is covered by unexpired warranties may commit without communicating with stores—a zero-phase commit. This optimization matters because for read-biased workloads, most transactions will be read-only. Row 4 shows that transactions that read from multiple stores but write to only one store may commit in a single phase if their read set is fully warrantied. This single-phase optimization pays off if objects are stored in such a way that writes are localized to a single store. For example, if a user’s information is located on a single store, transactions that update only that information will be able to exploit this optimization. While warranties usually help performance, they do not strictly reduce the number of phases required to commit a transaction. Transactions performing updates to popular data may have their commits delayed. Since the commit time may exceed the expiration time of warranties used in the transaction, the additional extend phase may be required to renew these warranties beyond the delayed commit time, as shown in the final row. ## 5 Computation warranties A computation warranty is a guarantee until time $t$ of the truth of a logical formula $\phi$, where $\phi$ can mention computational results such as the results of method calls. We focus here on the special case of warranties generated by memoized function calls, where $\phi$ has the form $\text{o.f}(\vec{x}) = ?$ for some object $o$ on which method $f$ is invoked using arguments $\vec{x}$, producing a value to be obtained from the warranty. Note that the value returned by $f$ need not be a primitive value. In the general case, it may be a data structure built from both new objects constructed by the method call and preexisting objects. Our goal is that warranties do not complicate programmer reasoning about correctness and consistency. 
Therefore, when $f$ is a memoized method, a computation of the form $v = o.f(\vec{x})$ occurring in a committed transaction should behave identically whether or not a warranty is used to obtain its value. This principle has several implications for how computation warranties work. It means that only some computations make sense as computation warranties, and that updates must be prevented from invalidating active warranties. ### 5.1 Memoizable computations To ensure that using a computation warranty is equivalent to evaluating it directly, we impose three restrictions. First, computation warranties must be deterministic: given equivalent initial state, they must compute equivalent results. Therefore, computations using a source of nondeterminism, such as input devices or the system clock, do not generate computation warranties. Second, we prevent memoization of any computation that has observable side effects. Side effects are considered to be observable only when they change the state of objects that existed before the beginning of the memoized computation. Importantly, this definition of “observable” means that memoized computations are allowed to create and initialize new objects as long as they do not modify pre-existing ones. For example, the top-N example from Section 3.5 computes a new object representing a set of items, and it may be convenient to create the object by appending items sequentially to the new set. Warranties on this kind of side-effecting computation are permitted. Enforcing this definition of the absence of side effects is straightforward in a system that already logs which objects are read and written by transactions. Third, a memoized function call reads from some set of objects, so updates to those objects may change its result, and may occur even during the same transaction that performed the function call. At commit time, the transaction’s write set is intersected with the read set of each potential warranty.
If the intersection is nonempty, the potential warranty is invalidated.

### 5.2 Defending computation warranties

Once a computation warranty is requested by a worker and issued by a store, the store must ensure that the value of the call stays unchanged until the warranty expires.

**Revalidation.** A conservative way to defend warranties against updates would be to delay all transactions that update objects used by the warranty. This approach is clearly safe because of the determinism of the warranty computation, but it would prevent too many transactions from performing updates, hurting write availability. Instead, we attempt to revalidate affected warranties when each update arrives. The store reruns the warranty computation and checks whether the result is equivalent to the result stored in the warranty. For primitive values and references to pre-existing objects (not created by the warranty computation), the result must be unchanged. Otherwise, two results are considered equivalent if they are semantically equal per the equals() method, which operates as in Java.

**Warranty dependencies.** In general, a warranty computation uses and thus depends on other warranties, whether state warranties or general computation warranties. For example, if the method top is implemented recursively (see Figure 3), the warranty for a call to top depends on warranties for its recursive calls. The dependencies between warranties form a tree in which computation warranties higher in the tree depend on warranties lower down, and the leaves are state warranties. Any warranty that has not expired must be defended against updates that could invalidate it. Defense is easy when the term of a warranty is contained within (a subset of) the terms of all warranties it depends on, including state warranties on all direct references to objects, because the validity of the higher-level warranty is implied by the defense of the lower-level warranties.
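The revalidation check described above amounts to rerunning the warranty computation against the post-update state and comparing results semantically. A minimal sketch, with our own names:

```java
import java.util.function.Supplier;

class Revalidation {
    // Keep the warranty only if rerunning the computation yields a
    // result semantically equal to the memoized one, using equals()
    // as the notion of semantic equality (as in Java).
    static <T> boolean stillValid(T memoized, Supplier<T> recompute) {
        T fresh = recompute.get();
        return memoized == null ? fresh == null : memoized.equals(fresh);
    }
}
```

If the check fails, the warranty cannot be defended against that update and the update must instead be delayed until the warranty expires.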
In general, however, a warranty can have a longer term than some of its dependencies. Updates to those dependencies must be prevented if they would invalidate the warranty, even after the warranties on those dependencies have expired. Conversely, it is possible to allow updates to warranty dependencies that do not invalidate the warranty. The implication is that it is often feasible to give higher-level warranties longer terms than one might expect given the rate of updates to their dependencies. For example, consider the recursive call tree for the method top(n, i, j) shown in Figure 3. If the request to see the top n items among the entire set is very popular, we would like to issue relatively long computation warranties for that result. Fortunately, updates to items (shown at the leaves of the call tree) that change their ranking might invalidate some of the warranties in the tree, but most updates will affect only a small part of the tree. Assuming that lower levels of the tree have short warranties, most updates need not be delayed much.

### 5.3 Reusing computation warranty values

In the case where the warranty computation created new objects, it may be crucial for correctness of the computation that the objects returned by the warranty are distinct from any existing objects. This desired semantics is achieved when using a warranty computation result by making a copy of all objects newly created during the warranty computation. These objects are explicitly identified in the warranty. Computation warranties are used whenever available to the client, to avoid performing the full computation. If the client is holding an expired warranty, or obtains an expired warranty from the CDN, it can use that expired warranty optimistically. At commit time, the expired warranty is revalidated during the prepare phase, exactly like a read prepare.

### 5.4 Creating computation warranties

Whenever code at a client makes a call to a memoized method, the client searches for a matching computation warranty.
If the client is not already holding such a warranty, it may search using a CDN, if available, or request the warranty directly from the appropriate store. If the client cannot find an existing computation warranty, it performs the warranty computation itself. It starts a new transaction and executes the method call. As the call is evaluated, the transaction’s log keeps track of all reads, writes, and object creations performed by the call. When the call is completed, the result is recorded and the log is checked to verify that the call does not violate any of the restrictions outlined above. If the warranty is still valid, the call, value, and transaction log are gathered to form a complete warranty proposal. At commit time, if the warranty proposal has not already been invalidated by an update to its read set, the proposal is sent to the store. The store looks at the request and, using the same mechanism as for state warranties, sets a warranty term. For state warranties, terms are set individually for each object, but here the warranty identity is defined by the entire set of arguments to the memoized method. Finally, the computation warranty is issued to the requesting client and the store begins to defend the new warranty or warranties proposed by the client.

## 6 Implementation

To evaluate the warranty mechanism, we extended the Fabric secure distributed object system [38]. Fabric provides a high-level programming model that, like the Java Persistence API, presents persistent data to the programmer as language-level objects. Language-level objects may be both persistent and distributed. It implements linearizability using OCC. Fabric also has many security-related features—notably, information flow control—designed to support secure distributed computation and also secure mobile code [5]. The dynamic security enforcement mechanisms of Fabric were not turned off for our evaluation, but they are not germane to this paper.
We extended the Fabric system and language to implement the mechanisms described in this paper. Our extended version of Fabric supports both state warranties and computation warranties. Computation warranties were supported by extending the Fabric language with memoized methods. Client (worker) nodes were extended to use warranties during computation and to evaluate and request computation warranties as needed. The Fabric dissemination layer, a CDN, was extended to distribute warranties and to support warranty subscriptions. Fabric workers and stores were extended to implement the new transaction commit protocols, and stores were extended to defend and revalidate warranties. The previously released version of Fabric (0.2.1) contains roughly 44,000 lines of (non-blank, non-comment) code, including the Fabric compiler and the run-time systems for worker nodes, store nodes, and dissemination nodes, written in either Java or the Fabric intermediate language. In total, about 6,900 lines of code were added or modified across these various system components to implement warranties. Fabric ships objects from stores to worker nodes in object groups rather than as individual objects. State warranties are implemented by attaching individual warranties to each object in the group. Some features of the warranties design have not been implemented; most of these features are expected to improve performance further. The single-store optimization of the commit protocol has been implemented for base Fabric, but rows 3–5 of Table 1 have not been implemented for warranties. The warranty refresh mechanism is also not yet implemented. To simplify the work needed to defend computation warranties, the current implementation only generates warranties for computations that involve objects from a single store. Also, our implementation does not use the dissemination layer to distribute computation warranties.
## 7 Evaluation

We evaluated warranties against existing OCC mechanisms, and other transactional mechanisms, primarily using three programs. First, we used the multiuser OO7 benchmark [13]. Second, we used versions of Cornell’s deployed Course Management System [10] (CMS) to examine how warranties perform with real systems under real-world workloads. Both of these programs were ported to Fabric in prior work [38]. Third, we developed a new benchmark that simulates a component of a social network in which users have subscribers.

### 7.1 Multiuser OO7 benchmark

The OO7 benchmark was originally designed to model a range of applications typically run using object-oriented databases. The database consists of several modules, which are tree-based data structures in which each leaf of the tree contains a randomly connected graph of 20 objects. In our experiments we used the “SMALL” sized database. Each OO7 transaction performs 10 random traversals on either the shared module or a private module specific to each client. When the traversal reaches a leaf of the tree, it performs either a read or a write action. These are relatively heavyweight transactions compared to many current benchmarks; each transaction reads about 460 persistent objects and modifies up to 200 of them. By comparison, if implemented in a straightforward way with a key-value store, each transaction would perform hundreds of get and put operations. Transactions in the commonly used TPC-C benchmark are also roughly an order of magnitude smaller [52], and in the YCSB benchmarks [54], smaller still. Because OO7 transactions are relatively large, and because of the data’s tree structure, OO7 stresses a database’s ability to handle read and write contention. However, since updates only occur at the leaves of the tree, writes are uniformly distributed in the OO7 specification.
To better model updates to popular objects, we modified traversals to make read operations at the leaves of the tree exhibit a power-law distribution with \( \alpha = 0.7 \) [11]. Writes to private objects are also made power-law distributed, but remain uniformly distributed for public objects.

### 7.2 Course Management System

The CS Course Management System [10] (CMS) is a 54k-line Java web application used by the Cornell computer science department to manage course assignments and grading. The production version of the application uses a conventional SQL database; when viewed through JPA, the persistent data forms an object graph not dissimilar to that of OO7. We modified this application to run on Fabric. To evaluate computation warranties, we memoized a frequently used method that filters the list of courses on an overview page. We obtained a trace from Cornell’s production CMS server covering three weeks in 2013, a period that encompassed multiple submission deadlines for several courses. To drive our performance evaluation, we took 10 common action types from the trace. Each transaction in the trace is a complete user request including generation of an HTML web page, so most request types access many objects. Using JMeter [30] as a workload generator, we sampled the traces, transforming query parameters as necessary to map to objects in our test database with a custom JMeter plugin.

### 7.3 Top-subscribers benchmark

The third benchmark program simulates a relatively expensive analytics component of a social network in which users have subscribers. The analytics component computes the set of 5 users with the largest number of subscribers, using the memoized top-N function described in Section 3.5. The number of subscribers per user is again determined by a power-law distribution with \( \alpha = 0.7 \).
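A power-law selection of this kind can be sketched with a simple Zipf-like sampler: item \( k \) (1-based) is chosen with probability proportional to \( 1/k^{\alpha} \). The implementation details below are our own, not those of the benchmark harness.

```java
import java.util.Random;

class PowerLaw {
    // Sample a 0-based index in [0, n) where item k+1 is chosen with
    // probability proportional to 1 / (k+1)^alpha.
    static int sample(int n, double alpha, Random rnd) {
        double[] cdf = new double[n];
        double total = 0;
        for (int k = 1; k <= n; k++) {
            total += 1.0 / Math.pow(k, alpha);
            cdf[k - 1] = total;
        }
        double u = rnd.nextDouble() * total;
        for (int k = 0; k < n; k++)
            if (u <= cdf[k]) return k;
        return n - 1;
    }
}
```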
The workload consists of a mix of two operations: 98% compute the list of top subscribers, corresponding to viewing the home page of the service; 2% are updates that randomly subscribe or unsubscribe a randomly chosen user. This example explores the effectiveness of computation warranties for caching expensive computed results.

### 7.4 Comparing with Hibernate/HSQLDB

To provide a credible baseline for performance comparisons, we also ported our implementation of CMS to the Java Persistence API (JPA) [12]. We ran these implementations with the widely used Hibernate implementation of JPA 2, running on top of HyperSQL (HSQLDB), a popular in-memory database, in READ COMMITTED mode. For brevity, we refer to Hibernate/HSQLDB as JPA. For JPA, we present results only for a single database instance. Even in this single-store setting, and even with Hibernate running in its optimistic locking mode, which does not enforce serializability, Fabric significantly outperforms JPA in all of our experiments. (Note that JPA in optimistic locking mode is in turn known to outperform JPA with pessimistic locking on read-biased workloads [49, 17].) This performance comparison aims to show that Fabric is a good baseline for evaluating the performance of transactional workloads: its performance is competitive with other storage frameworks offering a transactional language-level abstraction.

### 7.5 Experimental setup

Our experiments use a semi-open system model. An open system model is usually considered more realistic [48] and a more appropriate way to evaluate system scalability. Worker nodes start transactions at exponentially distributed intervals at a specified average request rate; consequently, each worker is usually running many transactions in parallel. Overall system throughput is the sum of the throughput of all workers. To find the maximum throughput, we increase the average request rate until the target throughput cannot be achieved.
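A minimal sketch of the semi-open model's arrival process, assuming a simple Poisson workload generator (the function and parameter names are ours, not Fabric's):

```python
import random

def arrival_times(rate_tx_per_s, duration_s, seed=42):
    """Poisson-process arrivals: transactions start at exponentially
    distributed intervals at the given average request rate, regardless
    of whether earlier transactions have finished (semi-open model)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    while True:
        t += rng.expovariate(rate_tx_per_s)  # mean gap = 1 / rate
        if t >= duration_s:
            return times
        times.append(t)
```

Ramping `rate_tx_per_s` upward until the measured commit rate stops tracking the offered rate reproduces the maximum-throughput search described above.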
The experiments are run on a Eucalyptus cluster. Each store runs on a virtual machine with a dual core processor and 8 GB of memory. Worker machines are virtual machines with 4 cores and 16 GB of memory. The physical processors are 2.9 GHz Intel Xeon E5-2690 processors. The parameters \( k_1 \) and \( k_2 \) (Section 3.8) are set to 0.5 and 2.0, respectively; the maximum warranty term was 10 s. Performance is not very sensitive to \( k_1 \) and \( k_2 \).

### 7.6 Results

We evaluated scalability using the OO7 benchmark with different numbers of stores. A “shared store” was reserved for the assembly hierarchies of all modules. The component parts of the modules were distributed evenly across the remaining stores. Only shared composite parts were placed on the shared store. Results presented are the average of three runs.

Figure 4: OO7 maximum throughput on a 2%-write workload as the number of stores increases. Warranties allow throughput to scale up with more stores.

Figure 4 shows maximum throughput in total transactions committed per second by 36 workers, as the number of stores increases. Error bars show the standard deviation of the measurements. As expected, adding stores has little effect on maximum throughput in base Fabric because the shared store is a bottleneck. Warranties greatly reduce load on the shared store, allowing us to add roughly 400 tx/s per additional store. Note that the plot only counts committed transactions; the percentage of aborted transactions for Fabric at maximum throughput ranges from 2% to 6% as the number of stores increases from 3 to 7; with warranties, from 4% up to 15%.

Table 2 reports on the performance of the CMS application in various configurations. The first three rows of Table 2 show that Fabric, with or without warranties, delivers more than an order of magnitude performance improvement over JPA.
Although the JPA implementation enforces weaker consistency, Fabric’s more precise object invalidation helps performance as contention increases. Warranties improve performance further, even in a single-store configuration. To evaluate how the system scales for a more realistic workload, we also ran CMS with 3 stores using Fabric and Warranties. Two stores each held data for multiple courses, while the third store contained metadata. As Table 2 shows, Warranties scale better than Fabric with the additional stores.

Increases in throughput would be less compelling if they came at the cost of high latency. Table 2 also reports the latency measured with the CMS workload on the various systems. Fabric has similar latency with or without warranties. Because CMS was not designed with computation warranties in mind, the functions we designated to be memoized turn out not to have a significant impact on performance. They are relatively cheap to evaluate on cached objects, and the bookkeeping for computation warranties adds no noticeable overhead.

Figure 5 shows how the performance of warranties is affected by the fraction of update transactions. Four different workload mixes were measured, each having a 94:6 shared-to-private traversal ratio and a 1:10 shared-to-private write ratio. When more than 10% of the transactions are updates, the cost of maintaining and issuing warranties in the current implementation is too high to obtain a performance improvement. The latencies at some of these throughputs are higher than Fabric’s, but still relatively low. At 2% and 5% writes, the latency of warranties is about 400 ms higher than Fabric’s; at 0% and 10% writes, it is nearly the same as Fabric’s.

Warranties can result in delaying transactions that are attempting to write to an object that has a warranty. We call this write delay. For all of the runs depicted in Figure 5, the median write delay is 0 ms.
However, some fraction of transactions are forced to wait until one or more warranties expire. The more read-biased the workload, the more frequently this happens. In the 2%-write workload, 70% of read-write transactions see no write delay. In the 10%-write workload, 82% see no write delay. Among those that encounter write delay, the delay is roughly uniformly distributed from 0 up to the maximum warranty length.

| System           | Stores | Tput (tx/s) | Latency (ms) |
|------------------|--------|-------------|--------------|
| JPA              | 1      | 72 ± 12     | 211 ± 44     |
| Fabric           | 1      | 3032 ± 144  | 143 ± 120    |
| Warranties       | 1      | 4142 ± 112  | 27 ± 27      |
| Comp. Warranties | 1      | 4088 ± 189  | 114 ± 30     |
| Fabric           | 3      | 4090 ± 454  | 311 ± 175    |
| Warranties       | 3      | 5886 ± 124  | 35 ± 4       |

Table 2: CMS throughput and latency on various systems. Both are averaged over 10 s at max throughput.

### 7.7 Computation warranties

To further evaluate the impact of computation warranties, we ran the top-N benchmark with Fabric, with state warranties, and with computation warranties. Because the performance of the recursive top-N strategy on Fabric and on state warranties was very poor, we used an alternate implementation that performed better on those configurations. Table 3 shows the average across three runs of the maximum throughput and the corresponding latency achieved in the system without any operations failing to commit during a 15 minute period. Computation warranties improve throughput by more than an order of magnitude. Since the computation warranty is on the value of the top 5 accounts rather than on each individual value used in computing the result, writes are not delayed as heavily as they are when using only state warranties.
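The intuition behind a computation warranty on the top-N result can be sketched as a cache that survives writes which provably cannot change the answer. This is an illustration of the idea only, not Fabric's implementation; all names are ours:

```python
import heapq

class TopNCache:
    """Sketch of the idea behind a computation warranty on a top-N result:
    the cached answer is discarded only when a write could actually change
    it, not on every update to the underlying counts."""

    def __init__(self, counts, n=5):
        self.counts = dict(counts)  # user -> subscriber count
        self.n = n
        self._cached = None         # memoized top-N, or None if invalid

    def top(self):
        if self._cached is None:    # (re)compute on demand
            self._cached = heapq.nlargest(self.n, self.counts,
                                          key=self.counts.get)
        return self._cached

    def update(self, user, delta):
        """Subscribe (+1) or unsubscribe (-1) events from the workload."""
        self.counts[user] = self.counts.get(user, 0) + delta
        if self._cached is None:
            return
        # Invalidate only if the write can change the memoized result: the
        # user is already in the top-N, or now rivals its smallest entry.
        threshold = self.counts[self._cached[-1]]
        if user in self._cached or self.counts[user] >= threshold:
            self._cached = None
```

Because most subscribe/unsubscribe events touch users far below the top-5 threshold, the vast majority of writes leave the cached result (and, in Fabric, its warranty) intact.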
## 8 Related work

Many mechanisms for enforcing concurrency control have been proposed in the literature: locks, timestamps, versions, logs, leases, and many others [33, 22, 34, 46, 7, 21]. Broadly speaking, these can be divided into optimistic and pessimistic mechanisms. The monograph by Bernstein, Hadzilacos, and Goodman provides a broad overview from the perspective of databases [8]. Warranties are an optimistic technique, allowing clients to concurrently operate on shared data. Haerder [24] divides mechanisms for validating optimistic transactions into “forward” and “backward” techniques. Backward validation is a better choice for the distributed setting [3], so Fabric uses backward validation: transactions are aborted in the prepare phase if any object in the read set has been modified.

Traditionally, most systems adopted serializability or linearizability as the gold standard of strong consistency [42, 8, 25]. But many recent systems have sacrificed serializability in pursuit of scalable performance. Vogels [53] discusses this trend and surveys various formal notions of eventual consistency. Much prior work aims to provide a consistency guarantee that is weaker than serializability; for example, causal consistency (e.g., [44, 39]) and probabilistically-bounded staleness [6]. Because this paper is about strong consistency, we do not discuss this prior work in depth.

Leveraging application-level information to guide implementations of transactions was proposed by Lamport [33] and explored in Garcia-Molina’s work on semantic types [19], as well as recent work on transactional boosting [26] and coarse-grained transactions [31]. Unlike warranties, these systems use mechanisms based on commuting operations. A related approach is red–blue consistency [36], in which red operations must be performed in the same order at each node but blue operations may be reordered. Like warranties, Sinfonia [4] aims to reduce client–server round trips without hurting consistency.
It does this through mini-transactions, in which a more general computation is piggybacked onto the prepare phase. This optimization is orthogonal to warranties. Warranties borrow from leases [21] the idea of using expiring guarantees, though important differences are discussed in Section 3.2. In fact, the idea of expiring state guarantees occurs prior to leases in Lampson’s global directory service [35]. We are not aware of any existing system that combines optimistic transactions with leases or lease-like mechanisms, against which we could meaningfully compare performance. A generalization of leases, promises [23, 29] is a middleware layer that allows clients to specify resource requirements via logical formulas. A resource manager considers constraints across many clients and issues time-limited guarantees about resource availability. Scalability of promises does not seem to have been evaluated. The tracking of dependencies between computation warranties, and the incremental updates of those warranties while avoiding unnecessary invalidation, is close to the update propagation technique used in self-adjusting computation [1], realized in a distributed setting. Incremental update of computed results has also been done in the setting of MapReduce [9]. The TxCache system [45] provides a simple abstraction for sharing cached results of functions operating over persistent data from a single storage node in a distributed system. As with the Fabric implementation of computation warranties, functions may be marked for memoization. TxCache does not ensure that memoized calls have no side effects, so memoized calls may not behave like real calls. Compared to Fabric, TxCache provides a weaker consistency guarantee, transactional consistency, requiring that all transactions operate over data that is consistent with a prior snapshot of the system. Escrow transactions [41] have some similarities to computation warranties. 
They generalize transactions by allowing commit when a predicate over state is satisfied. Certain updates (incrementing and decrementing values) may take place even when other transactions may be updating the same values, as long as the predicate still holds. Compared to computation warranties, escrow transactions support very limited predicates over state, and their goal is different: to permit updates rather than to allow the result of a computation to be widely reused.

## 9 Conclusions

Strong consistency tends to be associated with the very real performance problems of pessimistic locking. While optimistic concurrency control mechanisms deliver higher performance for typical workloads, read prepares on popular objects are still a performance bottleneck. Warranties address this bottleneck by allowing stores to distribute warranties on popular objects, effectively replicating their state throughout the system. Warranties can delay update transactions, but our results suggest that the delay is acceptable. Effectively, warranties generalize OCC in a way that adjusts the balance of power between readers and writers, substantially increasing overall performance. Computation warranties improve performance further by supporting memcached-like reuse of computations, but without losing strong consistency.

## Acknowledgments

We would especially like to thank Robert Soulé for help setting up experiments and Nate Foster for good suggestions. Chin Isradisaikul also had good ideas for presentation, and we thank our shepherd Yuan Yu. We thank Hakim Weatherspoon for the use of Fractus cloud infrastructure provided by an AFOSR DURIP award, grant FA2386-12-1-3008. This project was funded partly by the Office of Naval Research (grant N00014-13-1-0089), by MURI grant FA9550-12-1-0400, by a grant from the National Science Foundation (CCF-0964409), and by an NDSEG Fellowship.
This paper does not necessarily reflect the views of any of these sponsors.
CommandBoard: Creating a General-Purpose Command Gesture Input Space for Soft Keyboards

Jessalyn Alvina 1, Carla F. Griggio 1, Xiaojun Bi 2, Wendy E. Mackay 1

1 LRI, Univ. Paris-Sud, CNRS, Inria, Université Paris-Saclay, F-91400 Orsay, France
2 Department of Computer Science, Stony Brook University, Stony Brook, New York, USA
{alvina, griggio, mackay}@lri.fr; xiaojun@cs.stonybrook.edu

HAL Id: hal-01679137 (https://hal.archives-ouvertes.fr/hal-01679137), submitted on 9 Jan 2018

Figure 1. CommandBoard creates a new command gesture input space above a soft keyboard. Users can: a) type ‘happy’ and use a dynamic guide to style it as bold; b) type ‘brightn’, draw an execute gesture and adjust the brightness slider; c) type ‘sans’, choose ‘sans mono’ and draw an execute gesture to change the font; d) type ‘color’, select yellow in the marking menu to change the brush color.

ABSTRACT

CommandBoard offers a simple, efficient and incrementally learnable technique for issuing gesture commands from a soft keyboard. We transform the area above the keyboard into a command-gesture input space that lets users draw unique command gestures or type command names followed by execute. Novices who pause see an in-context dynamic guide, whereas experts simply draw.
Our studies show that CommandBoard’s inline gesture shortcuts are significantly faster (almost double) than markdown symbols and significantly preferred by users. We demonstrate additional techniques for more complex commands, and discuss trade-offs with respect to the user’s knowledge and motor skills, as well as the size and structure of the command space.

INTRODUCTION

Today’s mobile devices offer a variety of functionality, with an emphasis on communication, games, and information consumption. Text entry comprises about 40% of mobile activity [4], and is usually accomplished with a soft keyboard. Originally designed to imitate physical keyboards, soft keyboards consist of a set of keys that can be tapped to input text. Gesture keyboards [24] offer a significantly faster alternative by letting users draw through each successive letter of the word. The resulting gesture is interpreted by a sophisticated recognition algorithm, which, when combined with the relevant dictionary, suggests the most likely word completions.

Although very effective for producing text, these keyboards are not designed to issue commands. Instead, mobile devices rely on buttons, menus and dialog boxes, which restricts the available command set to what fits on a tiny screen. These recognition-based command techniques are easy to learn, but rarely offer a path toward recall-based expert use, even though many users regularly spend hours interacting with their mobile devices. An exception is a markdown language, which styles text by surrounding it with special symbols, such as _hello_ to italicize hello. This approach is efficient on a physical keyboard, since it avoids leaving the keyboard to move the mouse, but requires two keyboard swaps on a soft keyboard. Worse, users have no easy way to learn the symbol mappings.

**CommandBoard**

Our goal is to offer users a simple, yet powerful method of issuing commands from a mobile device.
We introduce CommandBoard, which transforms a soft keyboard into an efficient, yet learnable command-entry tool. We build on a key insight from the gesture keyboard, i.e. that the system can recognize users’ gestures as they cross over the keys and interpret them as text. CommandBoard generalizes this idea by creating an additional space, above the keyboard, for interpreting free-form gestures. We can think of this as extending a transparent interaction layer above the keyboard, where users can still see the usual display, but also issue gesture commands. This creates a general-purpose gesture command input space that supports a variety of command entry techniques.

CommandBoard takes full advantage of the limited screen real estate on a smartphone. Figure 2 shows four discrete interaction spaces. As with a gesture keyboard, the lower space is dedicated to generating text input or emoticons via tapping, crossing or dwelling on keys. Users can also swap keyboards, e.g. numeric or emoticon. CommandBoard includes additional features (marked in green), which let users specify command names for later execution via a gesture.

RELATED WORK

**Selecting Commands via Discrete Actions**

Graphical user interfaces usually offer menus and toolbars to execute commands. This allows non-expert users to quickly learn which commands are available, but makes large or complex command sets difficult to access. Small menus and toolbars let users quickly access common items, but do not help with large command sets, which may require extensive search and multiple physical operations to find the desired item [20]. Accessing a multi-level hierarchical menu forces the user to move through a multi-step process of selecting the appropriate category before finding the desired leaf. Keyboard shortcuts let users select commands via a sequence of key presses. Although very efficient for experts, the process of transitioning from novice to expert can be very slow [14].
In fact, research from engineering psychology shows that the most commonly forgotten cognitive skill is performing multi-step action sequences [23].

**Selecting Commands via Gestures**

In contrast to performing a sequence of discrete actions, drawing a gesture is a perceptual motor skill that involves a continuous response, with little memory loss over long periods of time [23]. Researchers have explored leveraging continuous gestures to select commands. A classic example is a Marking Menu [12], which supports executing commands via directional strokes. FlowMenu [8] extends the hierarchical marking menu to include parameter adjustment of an item. For example, a user can select a zoom command and specify the zoom value in the sub-menu, or even type the value (when the desired number does not exist) without lifting the pen. Li’s [15] world-wide deployment of a gesture-search system for smartphones demonstrated that users can successfully access their data via gestures in their day-to-day mobile activities. Appert & Zhai [2] investigated using gestures as an alternative to keyboard shortcuts. They found that gesture shortcuts are easier to learn and recall thanks to their spatial and iconic properties. OctoPocus [3] offers better support for learning gesture shortcuts, acting as a dynamic guide to help users follow the correct gesture template: if the user hesitates, OctoPocus appears, showing the remaining possible ways to finish the gesture. This highlights the need for progressive feedforward and feedback to support incremental learning, to help novices transition to expert users.

**Augmenting Soft Keyboards**

In response to the large demand for text entry on mobile phones, phone manufacturers are developing keyboard extensions that offer new capabilities, from suggesting emoticons to general search. The latest version of Google Keyboard (now called Gboard) includes an in-context search engine.
Users tap a button on the top-left of the keyboard to access the search engine, where they can directly type the search keyword, see the results, and share them. TapBoard 2 [9] enables pointing via a soft keyboard, adding support for bimanual interaction. Arpege [7] supports multi-finger chord interaction, with dynamic guides to show novices where to place their fingers. Previous research has also explored ways of supporting expressivity with soft keyboards: KeyStrokes [19] visualizes the unique typing style of the user on a colorful canvas; Buschek et al. [5] render the user’s typing variations into dynamic handwritten-looking output; and Expressive Keyboards [1] “recycle” users’ gesture-typing variations to generate and control rich, expressive output.

As in any multitasking environment, switching between typing and issuing commands incurs interruption costs [22]. To reduce these costs, researchers have explored augmenting the keyboard with gesture-based commands. For example, Fuccella et al. [6] propose a two-finger touch gesture performed directly on top of a soft keyboard that lets the user move the caret and thus select text. Command Strokes [11] employ additional buttons, e.g. COMMAND, to enable keyboard shortcuts on gesture keyboards [10, 24]. Users can simulate using control keys on a physical keyboard, e.g. drawing a gesture that passes through COMMAND then C to perform COMMAND+C. CommandBoard moves one step further by turning the space above the keyboard into a general-purpose command gesture space, to support more sophisticated command generation.

COMMANDBOARD TECHNIQUES

We are interested in extending the interaction capabilities of gesture keyboards. By taking advantage of the otherwise-unused input space above the keyboard, CommandBoard significantly increases the keyboard’s power, letting users execute commands from the current command set, even if they are not visible on the screen.
Note that we do not seek to define a single ‘best’ method of issuing commands, since different commands perform better in different contexts [16], but rather to create a keyboard that offers users a choice, based on their cognitive and motor skills, as well as the size and organization of the current command set. CommandBoard exists in harmony with existing command-generation techniques, such as menus and buttons, but also offers novices the opportunity to transition into power users, to execute commands fluidly at their fingertips. Before describing the CommandBoard concept, we first describe the properties of gesture keyboards. We then show how CommandBoard leverages these to provide users with a variety of simple, yet powerful command invocation techniques.

**Gesture Keyboards**

Gesture keyboards let users either tap each letter to enter text or gesture-type by drawing a line that connects all the letters (a word-gesture). Word-gesture recognition requires a multi-channel recognition engine [10], where the drawn shape is first compared to an “ideal” shape, i.e. from middle-point to middle-point of each key. The recognition engine then produces a list of word candidates. This list is shortened based on the actual location of the drawn word-gesture on the keyboard and weighted based on language information. The recognition engine also considers temporal features: if the user slows down at a letter, the recognizer weights the word candidate higher. This recognition process is conducted progressively as the user moves her finger: at each touch, the gesture keyboard generates a list of at least four suggested words. The first is treated as the final result; the next three are displayed in the suggestion bar. The gesture keyboard may also auto-complete the current word-gesture, even before the user reaches the last letter of the intended word.
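The location channel of this recognition scheme can be sketched as follows, assuming a toy staggered key layout and a $1-recognizer-style resampling step; the real engine also weights candidates by location pruning, language model, and timing, which this sketch omits:

```python
import math

# Toy staggered key layout (an assumption; real keyboards differ).
ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {c: (x + 0.5 * y, y) for y, row in enumerate(ROWS)
        for x, c in enumerate(row)}

def ideal_path(word):
    """The 'ideal' template: key middle-points visited in order."""
    return [KEYS[c] for c in word]

def resample(path, n=32):
    """Resample a polyline to n evenly spaced points ($1-recognizer style)."""
    path = list(path)
    dists = [math.dist(a, b) for a, b in zip(path, path[1:])]
    interval = (sum(dists) or 1e-9) / (n - 1)
    pts, D, i = [path[0]], 0.0, 1
    while i < len(path):
        d = math.dist(path[i - 1], path[i])
        if d > 0 and D + d >= interval:
            t = (interval - D) / d
            q = (path[i - 1][0] + t * (path[i][0] - path[i - 1][0]),
                 path[i - 1][1] + t * (path[i][1] - path[i - 1][1]))
            pts.append(q)
            path.insert(i, q)   # continue measuring from the new point
            D = 0.0
        else:
            D += d
        i += 1
    while len(pts) < n:         # guard against floating-point shortfall
        pts.append(path[-1])
    return pts[:n]

def match(drawn, vocabulary, n=32):
    """Rank candidate words by mean point-to-point distance between the
    drawn gesture and each word's resampled ideal path (location only)."""
    d_pts = resample(drawn, n)
    def score(word):
        w_pts = resample(ideal_path(word), n)
        return sum(math.dist(a, b) for a, b in zip(d_pts, w_pts)) / n
    return sorted(vocabulary, key=score)
```

The sorted list plays the role of the candidate list above: the best match becomes the default result and the next few fill the suggestion bar.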
If a word-gesture is drawn outside the keyboard space, the gesture keyboard captures the touch event but stops recognizing the word. When the user’s finger is released, the word output is cancelled and not rendered. By contrast, CommandBoard interprets a wide variety of gestures drawn above the keyboard. The next sections describe the two most basic techniques: TYPE-AND-EXECUTE and INLINE GESTURE SHORTCUTS.

**Type-and-Execute Commands**

Although novices may need to search through menus to discover the available commands, frequent users are usually familiar with both the commands and their names. Navigating through menus can be time-consuming, especially if the user forgets where the desired command is classified within a hierarchical menu. Some graphical user interfaces offer a search bar where typing the keyword or command name displays its location in a pull-down menu, if the command exists. Clicking on the search result issues the command, as if it had been selected from the menu. CommandBoard offers a similar function by letting the user type any command name from the keyboard, and then execute it directly by drawing the execute gesture in the display area above the keyboard.

**Keyword Search**

Since CommandBoard co-exists with traditional menus, we have prior knowledge of the current set of command names in the command-gesture input space. When a user gesture-types a word, CommandBoard’s TYPE-AND-EXECUTE technique examines the first four words suggested by the keyboard recognition engine to see if any is a keyword in the command space. The TYPE-AND-EXECUTE technique treats each element of a compound command name as a search keyword. For example, both “line” and “spacing” can be used to find LINE SPACING. Users need only type the first unique letters of a long command name and the system will suggest the full command.
For example, typing ‘brightn’ produces the BRIGHTNESS command in the command bar, which can then be invoked by performing the execute (\(\Lambda\)) gesture (Figure 1b).

**Command Preview**

If the keyword search is successful, the **TYPE-AND-EXECUTE** technique displays a preview: the full command name appears at the top of the screen. The keyboard continues to recognize the word-gesture as the user types. Thus, when the preview appears, the **TYPE-AND-EXECUTE** technique stores the keyword so that even if the recognized word changes as the user slides her finger upward, the command keyword remains the same. If the user continues gesture-typing, the preview disappears. If the user releases her finger within the keyboard space, the word appears as normal text.

**Command Execution**

If, after typing a recognized command name, the user continues to slide her finger upward, she enables the command-gesture input space. If she now performs a \( \Lambda \) gesture, the **TYPE-AND-EXECUTE** technique will execute the corresponding command. This allows the user to perform any command directly from the keyboard, as long as she already knows the command name. She need not learn any special commands beyond the **execute** gesture. We designed the **execute** gesture specifically so that it would not interfere with Gboard’s technique for cancelling gestures. (The user cancels the current word by sliding her finger into the space above the keyboard and releasing it.) By contrast, **CommandBoard**’s **execute** gesture is designed to move up and then down, explicitly changing direction, to reduce the risk of issuing unintended commands. The following examples illustrate various applications of **CommandBoard**’s **TYPE-AND-EXECUTE** technique.

**Text Editor Application**

Most text editing applications for mobile devices, such as Google Docs, offer only a limited number of commands.
The process is also cumbersome: Selecting a menu command requires hiding the keyboard, navigating to and executing the command, then closing the menu and bringing back the keyboard, all before continuing to type. The **TYPE-AND-EXECUTE** technique simplifies command selection for text editors. The user can type the name of any menu item, as if it were a search word, and then execute it directly. For example, Figure 1c shows the user typing the word ‘sans’, then sliding her finger above the keyboard to perform the **execute** gesture, at which point **TYPE-AND-EXECUTE** applies the **SANS MONO** font to the selected text. **CommandBoard**’s **TYPE-AND-EXECUTE** technique also lets users type sub-menu names, and display their items in the command bar located above the suggestion bar. This is particularly useful when the menu item cannot be typed, for example the numbers shown in Figure 3. The user types ‘line’ and a preview for the **LINE SPACING** sub-menu appears (Figure 3a). The menu items then appear on the command bar. She sets the **LINE SPACING** value to 1.2 by crossing through it in the command bar and then performing the **execute** gesture (Figure 3c). **Doodle Application** Many mobile applications, such as iMessage and SnapChat, let users ‘doodle’ on their messages. **CommandBoard**’s **TYPE-AND-EXECUTE** technique lets users specify brush properties, such as changing the color or brush type, with a marking menu [13]. For example, in Figure 1d, the user types ‘color’. She slides her finger upward to reveal the **COLOR** marking menu, which brings up various brush colors, and then moves down-left to select **YELLOW**. Note that a challenge in combining **CommandBoard** with a marking menu is deciding when a gesture should be interpreted as a ‘mark’. One solution is to require users to begin from the middle of the screen, which, given the phone’s limited screen real estate, would ensure sufficient space to move in all directions.
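The marking-menu selection in the doodle example (moving down-left to pick YELLOW) amounts to classifying a stroke's direction into one of the menu's slices. A minimal sketch, assuming an eight-way layout with compass labels, neither of which is from the actual implementation:

```python
import math

# Toy sketch of classifying a marking-menu stroke by direction, as in the
# COLOR menu example where a down-left movement selects YELLOW. The
# eight-way layout and the compass labels are illustrative assumptions.

ITEMS = ["E", "NE", "N", "NW", "W", "SW", "S", "SE"]  # 45-degree slices

def select(dx, dy):
    """Map a stroke vector to one of eight menu slices.

    Screen coordinates: y grows downward, so dy is negated to recover
    the conventional mathematical angle.
    """
    angle = math.degrees(math.atan2(-dy, dx)) % 360
    return ITEMS[int(((angle + 22.5) % 360) // 45)]

select(-1, 1)  # a down-left stroke falls in the "SW" slice
```

The `+ 22.5` offset centers each slice on its compass direction, so small angular wobble around a slice's axis still selects the intended item.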
**Parameterization** **CommandBoard**’s **TYPE-AND-EXECUTE** technique can also be combined with sliders to parameterize commands. For example, in Figure 1b, the user begins gesture-typing the ‘brightness’ command. Upon performing the \( \Lambda \) gesture, a slider appears, with the handle under her finger. Moving along the x-axis (left and right) adjusts the screen’s brightness, whereas moving along the y-axis (up and down) moves the slider’s position on the screen. **Inline Gesture Shortcuts** **CommandBoard**’s **INLINE GESTURE SHORTCUTS** let users invoke gesture shortcuts from the keyboard as they type. Instead of typing the command name, the user types the object of the command, for example, the text to be styled. The user then slides her finger above the keyboard, pausing to bring up the dynamic guide that shows the current set of possible commands (see Figure 1a). Users can benefit from motor memory to recall these gestures. As they become experts, they can perform the command gesture directly, without pausing for the guide. The following examples illustrate various applications of **CommandBoard**’s **INLINE GESTURE SHORTCUTS** technique: **Chat Application** Although soft keyboards are not specifically designed to support command input, users often use markdown languages to issue text styling commands. For example, typing an asterisk before and after a word (‘*help!*’) produces **help!**. Markdown commands are effective keyboard shortcuts when using physical keyboards, because users avoid lifting their hands to move the mouse. On soft keyboards, however, markdown languages force users to switch from the alphabetic to the symbolic keyboard, disrupting their writing flow. Unfortunately, issuing styling commands as text can be cumbersome, especially if done often or multiple times in a row. CommandBoard’s inline gesture shortcuts offer a more efficient alternative, by letting the user execute a specialized gesture directly from the keyboard.
In Figure 1a, the user wants to style the word ‘happy’. After writing it, she moves into the upper area, pauses to see several styling alternatives, and then follows the pigtail to execute the bold command. As before, the dynamic guide offers a path to help users develop their motor memory, becoming expert over time. CommandBoard’s markdown language is designed to be similar to those in existing applications, where the user writes a symbol before and after the word. Thus, writing ‘happy’ followed by a pigtail gesture generates “*happy*” on the text field buffer, which will then be rendered as **happy**. This enables users to style more than one word, by moving the caret between the markdown symbols and inserting more words. **Contacts Application** Most phones have a CONTACTS application that lets users tap on a contact to view the person’s details and then call, send an SMS, or use another communication app to communicate with that person. Users can also access a person’s details by typing her name in the search bar. CommandBoard’s inline gesture shortcuts let users issue commands from within the search bar, as soon as the desired result appears. For example, if ‘Mom’ exists in the contacts list, the user can gesture-type ‘Mom’, then slide up to the upper space and draw a pigtail gesture to call her. If the search produces multiple contacts the command bar displays the alternatives. For example, Figure 2 shows two contacts: Alice Brooke and Alice Waltz. Here, the user crosses through the Alice Brooke contact and then draws a pigtail gesture to call her. Note that both type-and-execute and inline gesture shortcuts are designed for efficiency, and rely on an experienced user’s ability to either recall the command name, or the associated gesture. Each technique provides scaffolding to help novice users learn, including the type-and-execute’s command bar and the inline gesture shortcuts’ dynamic guides.
However, these techniques can only display a small number of commands, which makes them most useful when the current context significantly limits the command space. **EVALUATION** Standard mobile devices use icons, buttons and menus to access functionality, because these are easy for novice users to recognize and use. However, many experts prefer the efficiency of command-line interfaces, even though they require learning and subsequent recall of command names and syntax. One of the goals of CommandBoard is to bridge the gap between these two approaches, by supporting both recognition- and recall-based interaction, with a smooth transition between novice and expert use. We begin by examining “expert” behavior, with a focus on the efficiency of the technique. We use a common experimental strategy for simulating expert performance: we show the participant the correct action, so that we measure only performance, unconfounded by memory issues. We sought an ecologically valid domain for testing CommandBoard’s ability to support both recognition and recall. We chose the markdown commands available in chat applications such as WhatsApp and Slack, since users can style their text by typing markdown symbols before and after the text (recall), with a “cheatsheet” in the menu if they forget the symbols (recognition). For evaluating expert behavior, markdown symbols offer a fairer, more realistic comparison than standard pull-down menus, which would be even slower. In the inline gesture shortcuts condition, users write a word and then draw a command gesture directly from the keyboard to style it, whereas in the markdown symbols condition, users type markdown symbols before and after the word to be styled. Although not a primary goal, we are also interested in whether or not users begin to learn gesture-command mappings, simply by using the technique. Our research questions include: 1.
Are inline gesture shortcuts faster and more accurate than text-based markdown symbols? 2. Do users prefer CommandBoard’s inline gesture shortcuts? **METHOD** We conducted a two-part study, using a within-participants design, to compare CommandBoard’s inline gesture shortcuts technique to markdown symbols (see Figure 5). Part A is a one-factor experiment that compares speed and accuracy of expert users using these two techniques. Part B is a qualitative study designed to assess participants’ preferences as well as incidental learning with respect to each technique. Part B follows Part A, with the same participants, hardware and software. **Participants** We recruited 12 right-handed participants (4 women, 8 men), aged 23-41. All use mobile phones daily. Two gesture-type daily; the others do not gesture-type. Three sometimes use markdown symbols in existing chat applications; the rest do not. **Hardware and Software** We used two LG Nexus 5X (5.2" display) smartphones, running Android 7.1. We implemented CommandBoard as an Android application that lets users issue text-styling commands with inline gesture shortcuts, using the native Android gesture recognizer. The inline gesture shortcuts technique requires the user to draw through the letters of the indicated word on the keyboard. CommandBoard recognizes the word, and renders it on the screen. If the user continues the stroke above the keyboard, a semi-transparent overlay appears and the stroke is interpreted as a command gesture. The overlay displays an OctoPocus-like [3] dynamic guide indicating the gestures associated with possible styling commands. Lifting the finger applies the recognized gesture-command to the word output and the overlay disappears. Note: We removed OctoPocus’ dwell delay in the experiment to avoid confounding time measures. We also implemented the markdown symbols technique, which requires the user to type a specified symbol before and after the word to be styled.
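The mode logic just described, where a stroke is word-gesture input inside the keyboard and becomes command-gesture input once it continues above it, can be sketched as follows. The coordinate convention and the border value are assumptions for illustration, not from the actual Android implementation:

```python
# Minimal sketch of CommandBoard's mode split, assuming screen coordinates
# where y grows downward and an illustrative keyboard top-border value.

KEYBOARD_TOP = 100  # assumed y-coordinate of the keyboard's top edge

def split_stroke(points):
    """Split a stroke into (word-gesture part, command-gesture part).

    The command part starts at the first point above the keyboard's top
    border; it is empty if the finger never leaves the keyboard area.
    """
    for idx, (x, y) in enumerate(points):
        if y < KEYBOARD_TOP:
            return points[:idx], points[idx:]
    return points, []
```

In this framing, the first half of the stroke is handed to the word recognizer and the second half to the command-gesture recognizer; a stroke that never crosses the border is ordinary gesture-typing.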
**Command-Set Design:** We created a command set consisting of six text-styling commands: underline, monospace, big, small, outline, and gradient color, and mapped them to inline gesture shortcuts and markdown symbols. The inline gesture shortcuts set consists of six gestures chosen from [2] (see Figure 4). We ensured that these gestures do not overlap when displayed together in an OctoPocus-style dynamic guide using [17]. The markdown symbols set consists of six characters chosen from the second row of the symbol keyboard: @, #, $, %, &, and +. We ensured that none overlap with existing chat symbols from, e.g., WhatsApp and Slack. Mappings between gestures and markdown symbols are counter-balanced across participants using a Latin square. Figure 4. Gesture set: Grey circles indicate where to begin drawing. **Phrase Set Design:** We constructed two sets of 24 three-word phrases drawn from the Oxford Dictionary². The middle words are each four to five letters long, and end in 24 different letters of the alphabet (we exclude ‘j’ and ‘q’), to ensure gesture starting points are distributed evenly across the keyboard. We also balanced angles between stroke segments across the sets, to avoid unwanted performance effects [21, 1]. Eight words include acute angles, e.g. "menu"; eight include at least one obtuse angle, e.g. the ‘agi’ in "magic"; and eight include only 0° or 180° angles, e.g. "power". We used the 24 middle words to create two sets of 24 three-word phrases. We created two phrases around each middle word, using three-to-six letter surrounding words that make sense when read together as a phrase. For example, the first set includes ‘play video games’, and the second set includes ‘some video clips’. We distributed the first set of 24 phrases across the practice and experimental conditions of the experiment, and distributed the second set across the pre- and post-test conditions of the study.
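The angle balancing above can be sketched by computing the turn angle between consecutive stroke segments of a gesture-typed word, taking key centers as the stroke's control points. The QWERTY coordinates below are a toy layout, not the actual keyboard geometry, and the sketch assumes no doubled letters:

```python
import math

# Illustrative sketch of the angle analysis used to balance the phrase
# sets: the angle between successive stroke segments of a gesture-typed
# word, over toy QWERTY key-center coordinates (an assumption).

ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm"]
KEYS = {c: (x + 0.5 * y, float(y))          # each row offset half a key
        for y, row in enumerate(ROWS) for x, c in enumerate(row)}

def turn_angles(word):
    """Angles in degrees between successive segments (0 = straight on,
    180 = full reversal). Assumes no doubled letters in `word`."""
    pts = [KEYS[c] for c in word]
    angles = []
    for (ax, ay), (bx, by), (cx, cy) in zip(pts, pts[1:], pts[2:]):
        v1, v2 = (bx - ax, by - ay), (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angles.append(math.degrees(math.acos(max(-1.0, min(1.0, dot / norm)))))
    return angles
```

Under this toy layout, a word like "power" yields only 0° and 180° turns (all its keys sit on the top row), matching the third category described above.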
We counter-balanced for order within and across participants using a Latin square. **Procedure** Figure 5 shows the study design. Part A consists of four conditions, each comprised of two blocks of six trials, grouped by technique. Part B consists of a single recomposition task where users can freely choose the desired technique.

<table>
<thead>
<tr>
<th></th>
<th>Practice</th>
<th>Experimental</th>
<th>Pre-test</th>
<th>Recomposition Task</th>
<th>Post-test</th>
</tr>
</thead>
<tbody>
<tr>
<td>Feedback?</td>
<td>yes</td>
<td>yes</td>
<td>no</td>
<td>yes</td>
<td>no</td>
</tr>
<tr>
<td>Gestures</td>
<td>6 trials</td>
<td>6 trials (×3)</td>
<td>6 trials</td>
<td rowspan="2">12 trials (free choice)</td>
<td>6 trials</td>
</tr>
<tr>
<td>Symbols</td>
<td>6 trials</td>
<td>6 trials (×3)</td>
<td>6 trials</td>
<td>6 trials</td>
</tr>
</tbody>
</table>

Figure 5. Part A (Experiment): Each condition (Practice, Experiment, Pre-test, and Post-test) includes two blocks of six trials, one per technique, with three replications in the experimental condition. Part B (Study): Participants recompose 12 of their own messages, with free choice of technique. **Part A: Trial Description** Each trial begins by displaying a three-word phrase, with a styled middle word, e.g. play video games. The participant presses START, then retypes the phrase, using the indicated technique to style the middle word. This simulates the process of issuing styling commands during the flow of writing. To simulate “expert” behavior, each trial includes explicit instructions as to how to execute the command, removing the need for recall memory. Participants may preview styling results. Practice and experimental trials display the correct styling command, either the gesture to draw (Figure 6a, inline gesture shortcuts condition) or the symbols to type (Figure 6b, markdown symbols condition). This simulates expert performance by eliminating errors due to forgetting a gesture shape or markdown symbol. Conditions are separated by short breaks.
**Practice Condition** Participants are exposed to two practice blocks, one per technique (inline gesture shortcuts and markdown symbols). Each block involves typing six three-word phrases, and styling the middle word. Each trial shows which inline gesture shortcuts or markdown symbols to use. In the inline gesture shortcuts condition, the gesture template appears as soon as the participant’s finger leaves the keyboard. Participants can retype phrases as often as they like, until they are comfortable performing the task quickly and reliably. An error message appears if they forget to apply the style or make a typing or styling error. Pressing CLEAN restarts the trial; DONE moves to the next trial. **Experimental Condition** Participants are exposed to two six-trial blocks, one per technique (inline gesture shortcuts and markdown symbols), for a total of 12 trials. Experimental trials are identical to practice trials, except that participants retype and style each three-word phrase three times (three replications), to provide a stable performance measure. **Pre- and Post-test Conditions** Participants begin with two blocks of six trials, one for each technique (inline gesture shortcuts and markdown symbols), counter-balanced for order within and across participants. Each trial displays the phrase to be typed including the styled middle word (see Figure 6c). Participants reproduce the styled phrase with each technique, with no feedback. This serves as a baseline measure of styling command recall. The pre- and post-test conditions are identical, but use phrases from the alternate phrase set. The pre-test offers an initial assessment of learning, how much they remember immediately after their first exposure to each technique. The post-test offers a second assessment, based on more extensive practice during the recomposition task. 
²https://en.oxforddictionaries.com/ Part B: Recomposition Task After completing the Pre-Test condition in Part A, participants are asked to perform a more open-ended set of tasks, in order to assess their overall preferences for each technique. For greater ecological validity, we asked participants to check their smart phones and choose 12 recent messages to retype, avoiding ones they felt were too personal. Participants were free to change the text as they liked. We then asked them to re-compose these 12 messages, using either technique to style at least one word. We provided a ‘cheat sheet’ with the relevant markdown symbols for the **Markdown Symbols** technique, and displayed a dynamic guide with the relevant gestures for the **Inline Gesture Shortcuts** technique. Measures **Input Time** We measure *Input Time* in seconds for the phrase and each word-output, referred to as: *WO1, WO2, and WO3*. Note that *WO2* includes inserting the two markdown symbols. This measure allows us to assess the gesture-typing time for both **Inline Gesture Shortcuts** and **Markdown Symbols**. **Gesture-Typing and Command Selection Time** The participant must gesture-type the middle word and style it using **Inline Gesture Shortcuts** or **Markdown Symbols** (i.e. *WO2*). We capture the times spent in each sub-activity. We measure **Command Selection Time** (**Command Time**) and Gesture-Typing Time (**Typing Time**) in seconds. **Inline Gesture Shortcuts** We measure the time spent leaving the keyboard and drawing the gesture (**Command Time**). If a participant crosses the top border of the keyboard, below the suggestion bar, at event $k$, then **Command Time** and **Typing Time** are as follows: \[ \text{Command Time} = t(event_N) - t(event_k) \\ \text{Typing Time} = t(event_k) - t(event_0) \] **Markdown Symbols**: We measure the time spent writing the symbols before and after the word (**Command Time**) for *WO2*. 
Given that an input $I$ is a sequence of touch events, *I = \{event(x,y,t,action)\}_{0,N}*, if a participant starts gesture-typing the word at event $j$ (tagged as down) and lifts her finger at event $i$ (tagged as up) in *WO2*, then **Command Time** and **Typing Time** are as follows: \[ \text{Command Time} = t(event_j) - t(event_0) + t(event_N) - t(event_i) \\ \text{Typing Time} = t(event_i) - t(event_j) \] **Gap Time** We assess how long participants spend switching from writing a regular word (**WO1**) to a styled word (**WO2**) and back again (**WO3**). Given that an input $I$ is a sequence of touch events, *I = \{event(x,y,t,action)\}_{0,N}*, where $t$ is the timestamp, we measure gap time between each word-output as follows: \[ gap(WO_i, WO_{i+1}) = t(WO_{i+1}.event_0) - t(WO_i.event_N) \] **Errors** We count three types of error: typographical errors (**Typing Errors**), incorrect symbols or gestures (**Styling Errors**), or forgetting to style the middle word (**Missing Errors**). Note that **Typing Errors** and **Styling Errors** can occur in the same trial. A trial is considered correct when it has no errors. **Data Collection** We log all touch events and the recognized word output for each trial. We tag each touch event with one of five actions: **Shift, Tap, Down, Move, and Up**. **Tap** involves pressing a key and **Shift** involves holding down the keyboard shift key. The remaining actions identify the start (down), drawing phase (move) and completion (up) of a gesture. These measures allow us to compute speed, movement time and errors for each technique. Participants answer a five-point Likert-style questionnaire to assess their perceived accuracy, speed, ease-of-use, confidence, comfort, and enjoyment of each technique. We also take observational notes and debrief participants, with a particular focus on what the participants liked and disliked about the techniques and their strategies for styling their text.
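Under the event model above, the timing measures might be computed as follows. The event fields and the keyboard border value are illustrative assumptions; the markdown **Command Time** follows the prose definition, the symbol typing before plus after the word:

```python
from collections import namedtuple

# Sketch of the timing measures over a word-output's touch events
# I = {event(x, y, t, action)}_0..N. Field layout and the keyboard's top
# border are assumptions, not the paper's actual logging format.

Event = namedtuple("Event", "x y t action")
KEYBOARD_TOP = 100  # assumed y of the keyboard's top border (y grows down)

def inline_gesture_times(events):
    """INLINE GESTURE SHORTCUTS: split WO2 at event k, the first event
    that crosses above the keyboard's top border."""
    k = next(i for i, e in enumerate(events) if e.y < KEYBOARD_TOP)
    typing_time = events[k].t - events[0].t      # t(event_k) - t(event_0)
    command_time = events[-1].t - events[k].t    # t(event_N) - t(event_k)
    return typing_time, command_time

def markdown_times(events, j, i):
    """MARKDOWN SYMBOLS: gesture-typing runs from event j (down) to event
    i (up); Command Time is the symbol typing before plus after the word."""
    typing_time = events[i].t - events[j].t
    command_time = ((events[j].t - events[0].t) +
                    (events[-1].t - events[i].t))
    return typing_time, command_time

def gap_time(prev_events, next_events):
    """Gap between consecutive word-outputs WO_i and WO_{i+1}."""
    return next_events[0].t - prev_events[-1].t
```

The split point `k` is the same border crossing that switches the keyboard into command-gesture mode, so the two sub-activities partition the word-output's total input time exactly.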
**Results** **Experiment** We collected a total of 432 experimental trials (12 **Participant** × 2 **Technique** × 6 trials × 3 replications). We removed one trial, in which P4 gave up after repeated typing errors on the third word of one phrase. After determining we had no unwanted significant effects from the word sets, we ran a one-way repeated-measures analysis of variance for factor **Technique**, followed by Tukey HSD tests for post-hoc comparisons. **Input Time** The overall **Input Time** (trial completion time) is significantly affected by **Technique** ($F_{1,11} = 86.9, p < 0.0001$). This is due primarily to styling the middle word (**WO2**), as shown in Figure 7. ![Figure 7. Average time spent entering each word. **WO2** is the styled word; gesture commands are almost twice as fast.](image) **Gesture-Typing and Command Selection Time** On average, participants spent significantly more time styling words with **Markdown Symbols** (mean 6.3s) than with **Inline Gesture Shortcuts** (mean 3.3s), with $F_{1,11} = 71.1, p < 0.0001$. When we break apart **Input Time** for *WO2* into time to select the command (**Command Time**) and time to gesture-type it (**Typing Time**), we find that participants spend significantly more time inserting markdown symbols (mean COMMAND TIME = 5.8s) than drawing gestures (mean COMMAND TIME = 1.5s) [F_{1,11} = 177.6, p < 0.0001] (Figure 8). However, they spend more time gesture-typing the styled word when using INLINE GESTURE SHORTCUTS (mean TYPING TIME = 1.8s) than MARKDOWN SYMBOLS (mean TYPING TIME = 0.6s) [F_{1,11} = 68.3, p < 0.0001]. This may be an artifact of the experimental design, since participants slowed down to check that they had gesture-typed the correct word, before drawing the styling gesture. In the long run, this may actually benefit the INLINE GESTURE SHORTCUTS technique, because slowing down improves the recognition process with gesture keyboards [10].
Recognized words are less likely to change when users slide into the command gesture input space. ![Figure 8. Average time spent gesture-typing (TYPING TIME) and issuing the command (COMMAND TIME). Participants drew quickly with INLINE GESTURE SHORTCUTS, but took significantly longer inserting MARKDOWN SYMBOLS.](image) ### Gap Time When the participants switch from writing the first word to applying a styling command to the second word, the gap duration (\( \text{GAP}(WO_1, WO_2) \)) is significantly longer for MARKDOWN SYMBOLS (mean = 1.9s) than for INLINE GESTURE SHORTCUTS (mean = 1.2s) [F_{1,11} = 49.7, p < 0.0001]. This suggests that participants needed more time to consider which key to press when selecting markdown symbols, i.e. searching and pre-planning. However, when participants finish applying the styling command to the middle word, they spend significantly less time writing the third word when using MARKDOWN SYMBOLS (mean \( \text{GAP}(WO_2, WO_3) \) = 0.9s) than when using INLINE GESTURE SHORTCUTS (mean \( \text{GAP}(WO_2, WO_3) \) = 1.5s) [F_{1,11} = 128.4, p < 0.0001]. In the MARKDOWN SYMBOLS condition, they can already see if they have applied the correct command as they press the SPACE bar, whereas with INLINE GESTURE SHORTCUTS, they must check again after releasing their finger. This could be improved by displaying a progressive preview at the end of the dynamic guide, which was not available during the experiment. ### Errors Participants made significantly fewer styling errors with INLINE GESTURE SHORTCUTS (mean STYLING ERRORS = 0.09) than with MARKDOWN SYMBOLS (mean STYLING ERRORS = 0.36), [F_{1,18} = 13.7, p = 0.0035]. However, participants using INLINE GESTURE SHORTCUTS were somewhat more likely to forget to actually style the word – INLINE GESTURE SHORTCUTS (mean MISSING ERRORS = 0.3) versus MARKDOWN SYMBOLS (mean MISSING ERRORS = 0.04), [F_{1,11} = 26.7, p = 0.0003].
This is probably an artifact of the experimental setting, since in actual use, users would not ‘forget’ to style a word if they wanted to. We did not find a significant effect of TECHNIQUE on accuracy [p = 0.47], which suggests that using gestures to style text does not interfere with typing accuracy. ### Preferences Study #### Pre- and Post-test Results We ran a repeated measures analysis of variance for factor TECHNIQUE to compare STYLING ERRORS during the Pre- and Post-test conditions. We found a significant interaction effect [F_{1,11} = 4.4, p = 0.0375] for STYLING ERRORS. In the Pre-test, the average STYLING ERRORS for INLINE GESTURE SHORTCUTS and MARKDOWN SYMBOLS are 0.52 and 0.32, respectively. In the Post-test, the average STYLING ERRORS for INLINE GESTURE SHORTCUTS and MARKDOWN SYMBOLS are 0.35 and 0.38, respectively. Prior to the pre-test condition, participants had practiced both techniques, but always with a direct indication of how to perform the gesture or what symbols to type. The pre-test was the first time that participants had ever tried executing the commands without help. Participants remembered half the gestures and two thirds of the symbols from the previous practice and experiment condition. The post-test was given after participants had experimented with their choice of technique to recompose their own text, and participants remembered almost two thirds of the gestures. This suggests that we should study longer term use of CommandBoard’s INLINE GESTURE SHORTCUTS technique, to see how well it supports incremental learning over time. #### Recomposition Task Results Given a choice between using MARKDOWN SYMBOLS or INLINE GESTURE SHORTCUTS, all participants chose gestures. Nearly all ignored the cheat-sheet showing all markdown symbols and their resulting styles; P11 was the exception, but he only looked at the cheat-sheet to get inspiration from the style examples.
We observed three strategies when styling words with gestures: thinking of a style first, and then using OctoPocus to follow the corresponding gesture; activating OctoPocus first, and then deciding on a style from the options; and performing a learned gesture to apply a style with no hesitation. A few participants explained the rationale behind their styling. P2 recomposed a text message to his wife with a shopping list, and he used all available styles to highlight the ingredients they had to buy for a salad. P8 associated word categories to styles: big meant positive or a lot, small meant negative or uncertain, underline was important or certain, and gradient was for special words. P12 also assigned meanings to different styles: gradient for opinions, outline for time-related words, underline for important words, and big for emphasis in general: “Big is the most useful.” P11 on the other hand cared less about the different styling options, and mostly focused on emphasizing important words: “I think I didn’t really want to choose a specific [style], I just wanted to add an effect on it so it looks different from other words.” **Self-reported Quantitative Measures** Participants were asked to rate six statements on a 5-point Likert scale, from strongly disagree (1) to strongly agree (5). The statements asked whether the current technique helped them to style text: a) accurately, b) quickly, c) easily, d) confidently, e) comfortably, and f) enjoyably. Table 1 lists the medians of each question for both techniques. An analysis using a Friedman test showed that participants reported significantly stronger agreement for **inline gesture shortcuts** compared to **markdown symbols** on five statements: ACCURATELY ($p = .034$, $\chi^2(1) = 4.5$), QUICKLY ($p = .007$, $\chi^2(1) = 7.36$), EASILY ($p = .011$, $\chi^2(1) = 6.4$), COMFORTABLY ($p = .002$, $\chi^2(1) = 10$) and ENJOYABLY ($p = .001$, $\chi^2(1) = 11$).
<table> <thead> <tr> <th>Statement</th> <th>Symbols</th> <th>Gestures</th> </tr> </thead> <tbody> <tr> <td>ACCURATELY*</td> <td>2.5</td> <td>4.0</td> </tr> <tr> <td>QUICKLY*</td> <td>2.0</td> <td>4.0</td> </tr> <tr> <td>EASILY*</td> <td>2.4</td> <td>4.0</td> </tr> <tr> <td>CONFIDENTLY</td> <td>2.5</td> <td>4.0</td> </tr> <tr> <td>COMFORTABLY*</td> <td>2.0</td> <td>4.0</td> </tr> <tr> <td>ENJOYABLY*</td> <td>2.0</td> <td>4.5</td> </tr> </tbody> </table> Table 1. Participant ratings of how each technique helped them to style text (median values; * indicates a significant difference). Participants significantly preferred gestures in all categories except 'confidently'. **User Preferences and Debriefing** The final questionnaire asked participants to rate their preference between the two techniques on a 5-point scale (from strong preference for **markdown symbols** to strong preference for **inline gesture shortcuts**). All participants preferred gestures: 10 indicated a strong preference, 2 indicated some preference. Six participants expressed their preference in terms of **typing flow**, explaining that **inline gesture shortcuts** best supported styling without interrupting their text composition process. P2 commented “I didn’t use the symbols at all in the chat. It’s troublesome to have to switch the keyboard, doing it in the beginning and at the end. It really breaks the flow of the writing. While with the gesture, it’s always there, I can pick what I want on the go.” P9 wrote “It’s enjoyable to use and in coherent with using gestures to type words.” Participants differed with respect to recognition and recall. Four participants found **inline gesture shortcuts** easier to recall than **markdown symbols**: “I used big, small, underline in the recomposition task, so I remember them” (P1). 
However, four participants had difficulty recalling the **inline gesture shortcuts** mappings: P9 said “the paths of gestures are difficult to link with their meanings”, and P6 said “If the gestures are well designed or designed by the user himself, it could be quite natural.” Two participants felt more comfortable creating mnemonics for **markdown symbols** rather than **inline gesture shortcuts**, despite their overall preference for **inline gesture shortcuts**: “It’s easier to remember the symbols for each type (+ for big; $ for the underlined because of the line in the $).” Three participants also appreciated the convenience of recognizing gestures with OctoPocus rather than always having to recall them: “this is nice, I don’t have to remember and just follow [the OctoPocus guideline].” Finally, we asked participants to suggest other applications for **inline gesture shortcuts**. Four participants suggested using gestures to add emojis: “I have 5-10 smiles that I always use, so I think it’d be nice if I can use the gesture to get it. Because it’s bothersome having to change to another keyboard view (emoticon), so if I can do it with the gesture it’d be cool.” (P3). Two thought of command shortcuts: “If you like a webpage, you could do a special gesture to bookmark it. To refresh the page, you could use a circular gesture, etc.” (P2). Other suggested applications were changing lines, replacing the enter key, taking notes and changing fonts. **DISCUSSION** On mobile devices, users issue commands via buttons, menus and dialog boxes and enter text with soft keyboards. Given the sheer amount of time users spend with their smartphones and other mobile devices, it seems odd that they willingly accept such limited forms of interaction. **CommandBoard** provides an additional set of interaction techniques, offering users both power and simplicity when executing commands.
**CommandBoard** repurposes the unused output space above the keyboard to accept gestures that invoke commands; extends gesture keyboards with command gestures, without disrupting existing command invocation techniques; and makes it easy for users to discover gesture-command mappings. We view **CommandBoard** as a strategy for transforming mobile devices into powerful, personalized tools, with which users can benefit from a variety of new command entry techniques, using text, gestures or both. By building upon the gesture keyboard, we leverage its powerful machine-learning algorithms, and offer an easy way to incorporate successful gesture-based command invocation techniques from the research literature. **CommandBoard** offers a variety of alternatives, depending upon the task, the user’s cognitive and motor skills, and the size and structure of the current command space. **CommandBoard** can also handle parametric commands, such as typing ‘brightness’ followed by the execute gesture, which displays a slider with continuous control of the screen brightness level. It could also be combined with the **Expressive Keyboard** [1], which would allow gestures to dynamically modify command parameters. Although we expected that **CommandBoard** would perform better than current markdown commands, we were surprised by the size of the effect (approximately twice as fast) and by how much the participants preferred it over standard markdown commands. We believe this is because users can fluidly style their text without interrupting the flow of their typing. Users not only avoid switching modes, but also avoid selecting text, the most time-consuming aspect of text editing [6]. One important issue is how best to support the transition from novice to expert use. Expert users must not only know that a command exists, but must also be able to recall either the command name, or the associated gesture shortcut.
We provide several types of dynamic guides to help novices learn, and to help intermittent users when they forget. For example, we present likely commands in the command bar, or show gesture paths, either Marking Menu-style directions [12] or free-form OctoPocus-style gestures [3]. The pre- and post-test results from the experiment indicate that users can easily learn gesture commands simply through the process of using them. We expected relatively low post-test scores, since users had only limited experience with the gestures during the practice and experimental conditions. Even so, users clearly made fewer errors in the post-test, which suggests that even limited experience can improve gesture recall. We should be able to further reduce learning time and enhance the transition from novice to expert performance by letting users define their own memorable, yet recognizable gestures [18], e.g. with [17]. In future work, we plan to conduct a longitudinal field study of CommandBoard, in order to more thoroughly investigate this transition from novice to expert. The experiment restricted CommandBoard’s inline gesture shortcuts to styling one word at a time. For example, the ‘happy’+pigtail gesture generates a styled ‘happy’. However, sometimes users want to apply a style to multiple words. One option would be to combine CommandBoard with other advanced text selection techniques, such as selecting a phrase with a two-finger gesture on top of the keyboard [6]. Gesture grammars can also combine command gestures with selection-scope gestures. For example, in Type-and-Execute, after sliding her finger to the input space above the keyboard, the user could specify the scope of the selection with a marking menu that includes last word, last sentence, last paragraph, and select all. All gesture-based menu systems, including Marking Menus and OctoPocus, run into visual overload problems when forced to display more than about 16 menu items at a time.
This is commonly addressed by creating hierarchical menus or by restricting the command set to a more limited context. CommandBoard faces these same limitations, but they can be partially mitigated when CommandBoard is used in conjunction with other recognition-based techniques. On the other hand, using CommandBoard to type commands on the keyboard and then select a parameter in the gesture-input space can help users access the full range of available commands. Ideally, mobile application developers should be able to use CommandBoard as a service, i.e. a library, when developing an application. The gesture keyboard captures the gesture input from the users and recognizes the word, which is then processed by the underlying application. The developers define the command set from the current set of menu items, the CommandBoard technique implementation, and the basic gesture-command mappings in the underlying application. **CONCLUSION** We present CommandBoard, which lets users gesture-type commands directly from a soft keyboard on a mobile device. We transform the otherwise unused area above the keyboard into an alternate, gesture-based input space. CommandBoard is a general approach for adding gesture-based commands to a soft keyboard, one that builds upon gesture-typing to offer several different techniques for generating commands. This paper proposes two basic techniques that address different trade-offs. The user can: 1. gesture-type a known command name followed by an execute gesture; or 2. move from the gesture-typing keyboard directly to the command-gesture input space above, to execute a unique command gesture. When practiced by experts, both techniques require the user to recall either the command name, or its associated gesture. However, CommandBoard also provides a path from novice to expert use, by offering two types of dynamic guides.
The command bar offers suggested command names that users can select by crossing through, and the OctoPocus-style dynamic guide offers progressive feedforward, to suggest alternative command-gesture mappings. We ran an experiment to compare CommandBoard’s command invocation to a conventional markdown language for styling text. We found that participants were not only significantly faster with CommandBoard (almost double), but also that participants significantly preferred CommandBoard (unanimously). We also demonstrate how we can leverage the gesture keyboard to extend its functionality while preserving its accuracy, simplicity, and accessibility. We implemented several applications of each technique to illustrate the variety of ways that it can be incorporated into different contexts. Finally, we show how CommandBoard can be combined with traditional command selection techniques, including pull-down menus and tool bars, as well as more innovative gesture-based menu techniques, such as Marking Menus and OctoPocus. In future work, we plan a longitudinal study to see how easily users learn CommandBoard over time and how they balance the trade-offs it offers between recognition-based novice performance, and recall-based expert performance. We also seek new ways of helping users personalize their mobile devices, by letting them define their own gestures and customize their commands. Ultimately, we see CommandBoard as offering a path towards simpler, yet significantly more powerful personal devices. ACKNOWLEDGMENTS This work was supported by European Research Council (ERC) grant n° 321135 CREATIV: Creating Co-Adaptive Human-Computer Partnerships. REFERENCES
Automatic Generation of Specialized Direct Convolutions for Mobile GPUs

Naums Mogers, University of Edinburgh, Edinburgh, United Kingdom (naums.mogers@ed.ac.uk); Valentin Radu, University of Edinburgh (vradu@inf.ed.ac.uk); Lu Li, University of Edinburgh (lu.li@ed.ac.uk); Jack Turner, University of Edinburgh (j Turner@ed.ac.uk); Michael O’Boyle, University of Edinburgh (mob@inf.ed.ac.uk); Christophe Dubach, University of Edinburgh (christophe.dubach@ed.ac.uk)

Digital Object Identifier (DOI): 10.1145/3366428.3380771. Document Version: Peer reviewed version. Published in: GPGPU '20: Proceedings of the 13th Annual Workshop on General Purpose Processing using Graphics Processing Unit.

Abstract

Convolutional Neural Networks (CNNs) are a powerful and versatile tool for performing computer vision tasks in both resource-constrained settings and server-side applications. Most GPU hardware vendors provide highly tuned libraries for CNNs such as Nvidia’s cuDNN or the ARM Compute Library.
Such libraries are the basis for higher-level, commonly-used machine-learning frameworks such as PyTorch or Caffe, abstracting them away from vendor-specific implementation details. However, writing optimized parallel code for GPUs is far from trivial. This places a significant burden on hardware-specific library writers, who have to continually play catch-up with rapid hardware and network evolution. To reduce effort and time to market, new approaches are needed based on automatic code generation, rather than manual implementation. This paper describes such an approach for direct convolutions using Lift, a new data-parallel intermediate language and compiler. Lift uses a high-level intermediate language to express algorithms, which are then automatically optimized using a system of rewrite rules. Direct convolution, as opposed to the matrix multiplication approach commonly used by machine-learning frameworks, uses an order of magnitude less memory, which is critical for mobile devices. Using Lift, we show that it is possible to automatically generate code that is ×10 faster than the direct convolution of the very specialized ARM Compute Library, while using ×3.6 less space than its GEMM-based convolution, on the latest generation of ARM Mali GPU.

1 Introduction

Convolutional neural networks [8] (CNNs) dominate the field of computer vision and image processing. Due to the availability of parallel accelerators such as mobile GPUs, we are able to use CNNs to perform these complex tasks on resource-constrained mobile devices. However, modern neural networks are computationally demanding, yielding large memory footprints and slow inference times, which has slowed their adoption in embedded settings. CNNs typically have several convolution layers and one or more fully connected layers. Most of their execution time is spent in convolutions [15].
Convolutions slide several kernels across a multi-channel 2D image (e.g., the first input typically has three channels, RGB). The layer configurations vary significantly across networks and even among layers of the same network. For instance, in VGG architectures [19], the first convolutional layer operates on a 224\( \times \)224 image with 3 channels while the 7th layer operates on a 112\( \times \)112 image with 128 channels. The size and shape of convolutional kernels might also vary between networks or layers. This diversity in convolution input shapes represents a significant challenge for high-performance software engineers. In fact, obtaining good performance for rapidly evolving networks, hardware and workloads is a significant engineering challenge for library vendors relying on hand-coded solutions. Most neural network libraries, such as Caffe [13] for CPU and cuDNN [5] for Nvidia GPU, solve this issue by expressing convolutions as General Matrix Multiplication (GEMM), since heavily optimized implementations are readily available. While this approach leads to high performance, it significantly increases the required memory footprint, which can be a problem when running on mobile devices. For instance, a GEMM implementation of the 2nd convolutional layer of VGG requires 116 MB of memory for a single image while the direct convolution requires only 13 MB. If a large neural network processes multiple images (e.g., a video stream) at once, the device memory is quickly filled up. Support for high-performance direct convolution is not as common, given that it is a specialized operation compared to the more generic GEMM. As a result, vendors typically do not invest as much effort in providing a tuned direct convolution implementation. As an example, the ARM Compute Library implementation of direct convolution only supports a handful of convolution shapes and is actually ×10 slower than its GEMM counterpart on the ARM Mali GPU.
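The VGG numbers quoted above (116 MB for GEMM versus 13 MB for direct convolution) follow from simple arithmetic. A quick sketch, assuming 4-byte floats and the VGG-16 second-layer shape (64 input channels, a 224×224 input, 3×3 kernels, stride 1 and "same" padding, which is our reading of the layer configuration):

```python
# Back-of-envelope check of the VGG layer-2 memory figures.
BYTES_PER_FLOAT = 4

def direct_input_bytes(c, h, w):
    # Direct convolution reads the input tensor as-is: C * H * W floats.
    return c * h * w * BYTES_PER_FLOAT

def im2col_bytes(c, k, out_h, out_w):
    # im2col stores one unrolled (K*K*C)-element patch per output position.
    return (k * k * c) * (out_h * out_w) * BYTES_PER_FLOAT

direct = direct_input_bytes(64, 224, 224)   # 12_845_056 bytes, ~13 MB
gemm = im2col_bytes(64, 3, 224, 224)        # 115_605_504 bytes, ~116 MB
```

The factor of roughly \(K \times K = 9\) between the two comes directly from each input pixel being replicated once per kernel position that covers it.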
This calls for an automatic approach that produces highly-specialized high-performance code for direct convolutions.

GPGPU '20, February 23, 2020, San Diego, CA, USA

This paper presents an automatic code generation approach for direct convolution based on Lift. Lift expresses algorithms using a high-level data-parallel Intermediate Representation (IR). A system of rewrite rules optimizes Lift expressions to specialize them for the target architecture. More specifically, this paper shows how CNN convolutions are expressed and optimized in Lift. This is achieved by exploring a parametric space which includes tile sizes, amount of padding, amount of data reuse and the number of sequential operations performed by a thread. A series of constraints is automatically produced to restrict the search to valid combinations of tuning parameters (e.g., input size must be divisible by tile size). Using the latest generation of ARM Mali GPU, we demonstrate that Lift generates high-performance direct convolution code that is on average ×10 faster than the ARM Compute Library direct convolution implementation, while using ×3.6 less space than the GEMM-based convolution provided by the same library. To summarize, the main contributions are:
- Show how we leverage Lift to express the convolutional layers of neural networks;
- Evaluate a large optimization space of 1,000 points with Lift;
- Produce code automatically for direct convolution that achieves a speedup of ×10 and a memory saving of ×3.6 over ARM’s own high-performance library on the ARM Mali GPU.

2 Motivation

2.1 Convolutional Neural Networks (CNNs)

CNNs are the tool of choice for most computer vision problems. They are composed of stacked layers of convolutions over multi-channel inputs, where each layer produces a feature map per convolution kernel. In computer vision, the first image passed to a convolutional neural network has three channels: red, green and blue.
They get transformed in scale and value based on the learned kernel weights at each layer. For classification tasks, the output tensor flattens each feature map into a vector and passes it to one or more affine transforms. These affine transformations account for very little of the total inference time. For example, in SENet [11], the most recent ImageNet winner, convolution accounts for 99.99% of total floating point operations. Therefore, this paper focuses primarily on the convolution operation.

2.2 Direct Convolution

Each convolution kernel has a receptive field of spatial size \((\text{kernel}_{\text{width}} \times \text{kernel}_{\text{height}})\) in 2D, usually square, \(K \times K\), and a depth to match the input number of channels \(C\), across all \(M\) kernels. On an input image of size \(C \times H \times W\), the direct convolution is performed with nested loops:

```
for o in 1 to M do
  for h in 1 to H do
    for w in 1 to W do
      sum = 0
      for c in 1 to C do
        for i in 1 to K do
          for j in 1 to K do
            sum += input[c][h+i][w+j] * kernels[o][i][j][c]
      output[o][h][w] = sum
```

2.3 GEMM

The convolution operation is commonly implemented as matrix multiplication due to the availability of highly optimized GEMM routines in libraries for both CPU (openBLAS) and GPU (CLBlas, cuDNN). This is achieved through the image-to-column (\(\text{im2col}\)) transformation, which unrolls each kernel into a row to form a matrix of all kernels, and maps each patch of the image to a column to form another large matrix, with one column for each position at which a kernel is applied in the direct convolution. Matrices formed by each input channel are concatenated row-wise.
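To make the two formulations concrete, the direct-convolution loop nest and the im2col construction can be checked against each other. A minimal sketch in Python (single channel, stride 1, no padding, 0-based indices; the 5×5 image and two 3×3 kernels mirror the Figure 1 setting, with kernel values of our own choosing):

```python
# Direct convolution (single channel): slide each K x K kernel over the image.
def direct_conv(image, kernels, K):
    H, W = len(image), len(image[0])
    out = []
    for kern in kernels:
        feat = [[sum(kern[i][j] * image[h + i][w + j]
                     for i in range(K) for j in range(K))
                 for w in range(W - K + 1)]
                for h in range(H - K + 1)]
        out.append(feat)
    return out

# im2col: one column per kernel position, one row per patch element.
def im2col(image, K):
    H, W = len(image), len(image[0])
    patches = [[image[h + i][w + j] for i in range(K) for j in range(K)]
               for h in range(H - K + 1) for w in range(W - K + 1)]
    return [list(row) for row in zip(*patches)]   # (K*K) x positions

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)]
            for row in a]

image = [[5 * r + c + 1 for c in range(5)] for r in range(5)]   # values 1..25
kernels = [[[1, 0, 0], [0, 1, 0], [0, 0, 1]],                   # main diagonal
           [[0, 0, 1], [0, 1, 0], [1, 0, 0]]]                   # anti-diagonal

# One matrix product (unrolled kernels x patch matrix) evaluates every
# kernel at every position, matching the flattened direct convolution.
gemm = matmul([sum(k, []) for k in kernels], im2col(image, 3))
flat_direct = [[v for row in feat for v in row]
               for feat in direct_conv(image, kernels, 3)]
assert gemm == flat_direct
```

Note the memory blow-up visible even here: the image has 25 elements, while the patch matrix holds 9 × 9 = 81.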
The entire convolution operation is then performed by executing a single matrix multiplication over these two large matrices using an efficient GEMM routine. Figure 1 presents the \(\text{im2col}\) operation, where two \(3 \times 3\) kernels are convoluted on a single-channel \(5 \times 5\) image. With direct convolution, the image has 25 elements and the two kernels have 9 elements each. To perform GEMM, the kernels are unrolled into two rows, and through \(\text{im2col}\) the input is mapped to the input-patch matrix, which is larger than the original image. In total, this simple convolutional layer requires at least \(9 \times\) more memory for the GEMM method than it would otherwise with the direct convolution.

2.4 Memory footprint

Figure 2 shows the actual run-time memory footprint required by the largest layer in the most popular deep neural networks. GEMM consistently requires more memory than direct convolution (one order of magnitude) due to the increased memory size of the transformed input, which can be a limitation when deploying on mobile and embedded devices.

3 Background: the LIFT System

The design goal of LIFT is to raise the programming abstraction and enable automatic performance optimizations on massively parallel accelerators, such as GPUs. LIFT provides a high-level Intermediate Representation (IR) [20], and a compiler that automatically translates the high-level IR to low-level target code. The LIFT IR is functional: operations are side-effect free, so LIFT primitives compose naturally. Optimization choices are encoded using a system of rewrite rules that capture algorithmic and hardware-specific optimizations.

3.1 LIFT Abstractions

LIFT includes hardware-agnostic algorithmic primitives, and low-level primitives which encode specific hardware details.

3.1.1 Algorithmic primitives

The main algorithmic primitives supported by LIFT and used in this paper are listed below.
These algorithmic primitives only express what needs to be computed, shielding programmers from any hardware-specific details.

\[
\begin{align*}
\text{map} &: (f : T \rightarrow U, \text{in} : [T]_n) \rightarrow [U]_n \\
\text{reduce} &: (\text{init} : U, f : (U,T) \rightarrow U, \text{in} : [T]_n) \rightarrow [U]_1 \\
\text{zip} &: (\text{in1} : [T]_n, \text{in2} : [U]_n) \rightarrow [(T,U)]_n \\
\text{split} &: (m : \text{int}, \text{in} : [T]_n) \rightarrow ([T]_m)_{n/m} \\
\text{join} &: (\text{in} : ([T]_m)_n) \rightarrow [T]_{m \times n} \\
\text{slide} &: (\text{size} : \text{int}, \text{step} : \text{int}, \text{in} : [T]_n) \rightarrow ([T]_{\text{size}})_{\frac{n - \text{size}}{\text{step}} + 1} \\
\text{pad} &: (l : \text{int}, r : \text{int}, \text{value} : T, \text{in} : [T]_n) \rightarrow [T]_{l+n+r} \\
\text{transpose} &: (\text{in} : ([T]_m)_n) \rightarrow ([T]_n)_m \\
\text{reorder} &: (f : \text{int} \rightarrow \text{int}, \text{in} : [T]_n) \rightarrow [T]_n \\
\text{let} &: (f : T \rightarrow U, \text{input} : T) \rightarrow U
\end{align*}
\]

In order to support the generation of code for parallel accelerators, LIFT introduces low-level primitives that are tightly coupled with the hardware-specific programming model. We briefly review the main OpenCL primitives used in this paper to target a mobile GPU.

**Map & reduce** LIFT exposes variations of the map primitive corresponding to the OpenCL programming model: \texttt{mapWrg} and \texttt{mapLcl}. These assign computation to workgroups and local threads, respectively. These primitives take an additional parameter specifying the dimension in which to map the computation in the thread iteration space. Sequential versions of the reduction and map primitives also exist in the form of \texttt{mapSeq} and \texttt{reduceSeq}.

**Vectorization** LIFT provides \texttt{asVector} and \texttt{asScalar}, which cast scalar arrays to vector types (e.g. float4) and vice versa.
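The 1D signatures above can be modelled directly on Python lists. A sketch of our own minimal interpretation (not the LIFT implementation):

```python
# List-based models of the LIFT algorithmic primitives.
def lift_map(f, xs):            return [f(x) for x in xs]
def lift_zip(xs, ys):           return list(zip(xs, ys))
def lift_split(m, xs):          return [xs[i:i + m] for i in range(0, len(xs), m)]
def lift_join(xss):             return [x for xs in xss for x in xs]
def lift_pad(l, r, value, xs):  return [value] * l + xs + [value] * r
def lift_transpose(xss):        return [list(col) for col in zip(*xss)]

def lift_reduce(init, f, xs):
    # LIFT's reduce yields a one-element array, matching [U]_1.
    acc = init
    for x in xs:
        acc = f(acc, x)
    return [acc]

def lift_slide(size, step, xs):
    # (n - size) / step + 1 windows, matching the signature above.
    return [xs[i:i + size] for i in range(0, len(xs) - size + 1, step)]
```

For example, `lift_slide(3, 1, [1, 2, 3, 4, 5])` yields the three overlapping neighborhoods `[[1, 2, 3], [2, 3, 4], [3, 4, 5]]`, and `lift_join` undoes `lift_split`.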
\texttt{vectorize} is provided to vectorize any scalar operator.

**Address Spaces** LIFT expresses OpenCL address spaces using \texttt{toGlobal} and \texttt{toPrivate}, which force the enclosed function to write its results into the corresponding address space. Private memory usually corresponds to registers, while global refers to off-chip GPU RAM accessible by all threads.

An example indexing function is stridedIndex(s), which orders elements with a stride \(s\), thus mapping an element \(i\) to position \(i / n + s \times (i \bmod n)\). Finally, the \texttt{let} primitive binds an input value to a scope, which is used in LIFT to express reuse as we will see later. The type of \texttt{let} is similar to that of a function application.

3.2 Example Stencil Program

This section reviews how LIFT expresses stencil computations [10], which form the basis for convolutions. Listing 1 shows an example code using LIFT primitives to express a stencil computation. The function stencil2D takes a \(3 \times 3\) weight array and a 2D image array.

**Listing 1. Example of a 2D Stencil**

```
def stencil2D(weights : [[float]_3]_3,
              inputData : [[float]_width]_height) = {
  mapWrg(0)(mapLcl(0)(neighborhood => {
    toGlobal(id)(
      reduceSeq(toPrivate(id)(0.0f),
        (acc, (l, r)) => acc + l * r,
        zip(join(neighborhood), join(weights)))) }))(
    slide2D((3, 3), (1, 1), inputData)) }
```

The LIFT compiler mostly handles 1D primitives. Higher-level abstractions for multi-dimensional arrays [10] can be built by reusing 1D primitives.
For instance, we can define map, slide and pad that operate on 2D arrays as follows:

\[
\begin{align*}
\text{map2D}(f, \text{input}) = {} & \text{map}(x \rightarrow \text{map}(f, x), \text{input}) \\
\text{slide2D}((\text{size}_1, \text{size}_2), (\text{step}_1, \text{step}_2), \text{input}) = {} & \text{map}(\text{transpose}, \text{slide}(\text{size}_1, \text{step}_1, \\
& \quad \text{map}(\text{row} \rightarrow \text{slide}(\text{size}_2, \text{step}_2, \text{row}), \text{input}))) \\
\text{pad2D}(l, r, t, b, \text{value}, \text{input}) = {} & \text{transpose}(\text{map}(\text{col} \rightarrow \text{pad}(t, b, \text{value}, \text{col}), \\
& \quad \text{transpose}(\text{map}(\text{row} \rightarrow \text{pad}(l, r, \text{value}, \text{row}), \text{input}))))
\end{align*}
\]

Figure 3. Visualization of the 2D Stencil Example in Listing 1

3.4 Optimization through Rewrites

Lift uses rewrite rules to encode optimization choices. This section briefly discusses two examples of such rewrites.

3.4.1 Tiling

Tiling improves locality and enables work distribution to independent groups of threads. When tiling the input data of convolutions, care must be taken to ensure that the tiles overlap. To achieve this, tiling of convolutions is achieved by simply reusing slide2D. This optimization is encoded using the following rewrite rule:

\[
\begin{align*}
& \text{map2D}(f, \text{slide2D}((\text{size}_1, \text{size}_2), (\text{step}_1, \text{step}_2), \text{input})) \leftrightarrow \\
& \quad \text{map2D}(\text{tile} \rightarrow \text{map2D}(f, \text{tile}), \text{slide2D}((\text{ts}_1, \text{ts}_2), (\text{ts}_1 - \text{step}_1, \text{ts}_2 - \text{step}_2), \\
& \quad\quad \text{slide2D}((\text{size}_1, \text{size}_2), (\text{step}_1, \text{step}_2), \text{input})))
\end{align*}
\]

This rewrite matches a function \( f \) applied to the results of a slide2D. The function \( f \) could be performing a convolution as in the example from Listing 1. In order to perform the tiling optimization, this rewrite replaces the matched expression by two levels of nested slide2D and a map2D applied to \( f \).
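Both the slide2D composition and the intuition behind the tiling rewrite can be checked with a list-based model. A sketch, again using our own Python modelling of the primitives (the tiling identity is shown in 1D for brevity):

```python
# 1D primitives modelled on lists.
def slide(size, step, xs):
    return [xs[i:i + size] for i in range(0, len(xs) - size + 1, step)]

def transpose(xss):
    return [list(col) for col in zip(*xss)]

# slide2D as composed above: slide along each row, group consecutive rows,
# then transpose each group into a 2D neighborhood.
def slide2D(size, step, xss):
    (s1, s2), (st1, st2) = size, step
    inner = [slide(s2, st2, row) for row in xss]
    return [transpose(g) for g in slide(s1, st1, inner)]

grid = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
windows = slide2D((2, 2), (1, 1), grid)
assert windows[0][0] == [[1, 2], [4, 5]]   # top-left 2x2 neighborhood
assert windows[1][1] == [[5, 6], [8, 9]]   # bottom-right neighborhood

# 1D view of the tiling idea: sliding a window of size k directly over the
# input equals cutting overlapping tiles (tile step = tile size - (k - step))
# and sliding inside each tile, then joining the per-tile windows.
xs, k, ts = list(range(8)), 3, 5
direct = slide(k, 1, xs)
tiles = slide(ts, ts - (k - 1), xs)
tiled = [w for t in tiles for w in slide(k, 1, t)]
assert tiled == direct
```

The overlap of `k - 1` elements between consecutive tiles is exactly what lets each tile compute its windows independently without missing any neighborhood that straddles a tile boundary.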
The first slide2D at the bottom is the original one, producing a 2D array of neighborhoods. The second one on top is the actual tiling of size \( \text{ts}_1 \times \text{ts}_2 \), which is performed by sliding the tile in 2D. The step is equal to the desired tile size minus the original step. This results in a 2D array of overlapping tiles containing 2D neighborhoods. The function \( f \) is finally mapped in 2D over each tile.

3.4.2 Vectorization

Vectorization is another important optimization that highly benefits GPUs such as the ARM Mali GPU, by using vector loads and stores and the built-in dot operator. The following rewrite expresses this optimization:

\[
\text{map}(f, \text{input}) \leftrightarrow \text{asScalar}(\text{map}(\text{vectorize}(f), \text{asVector}(\text{input})))
\]

When a function \( f \) is mapped, it is possible to vectorize the function with the Lift vectorize primitive. The asVector primitive casts the input scalar array into a vector type while asScalar does the reverse.

4 Direct Convolution in Lift

We now describe how a convolutional layer is expressed in Lift and introduce the low-level optimizations applied.

4.1 High-level Lift Expression

Listing 3 shows the Lift expression of a convolution layer with three inputs. \texttt{kernelsWeights} contains the weights of all the kernels across the width, height and input channels. \texttt{kernelsBiases} are the biases, one per kernel. \texttt{inputData} contains the layer’s input, which is a 3D array (width \( \times \) height \( \times \) input channels). \texttt{padSize} is a tuple of four values that specifies how much padding is required in each direction by the layer specification. \texttt{kernelStride} specifies by how much each kernel is displaced across the input (the step). The output data is a set of feature maps represented as a 3D array with the outer dimension corresponding to the number of kernels. The Lift program in Listing 3 consists of three steps.
First, data is padded with zeros as per the configuration of the layer. Then, we slide in 2D across the padded input along the two spatial dimensions (inputWidth and inputHeight), producing the sliding windows. Finally, convolution is performed using a combination of Lift primitives.

```
1  def convLayer(kernelsWeights : [[[[float]_inputChannels]_kernelWidth]_kernelHeight]_numKernels,
2                kernelsBiases : [float]_numKernels,
3                inputData : [[[float]_inputChannels]_inputWidth]_inputHeight,
4                padSize : (int, int, int, int), kernelStride : (int, int)) = {
5    val paddedInput = pad2D(padSize, value = 0, inputData)
6    val slidingWindows = slide2D((kernelHeight, kernelWidth), kernelStride, paddedInput)
7    map2D(slidingWindow =>
8      map((singleKernelWeights, singleKernelBias) =>
9        reduce(init = singleKernelBias, f = (acc, (x, w)) => acc + x * w,
10         zip(join(join(slidingWindow)), join(join(singleKernelWeights)))),
11       zip(kernelsWeights, kernelsBiases)),
12     slidingWindows) }
```

**Listing 3. High-level Lift expression of a convolutional layer**

First, we map over each sliding window using map2D on line 7. Then, kernelsWeights and kernelsBiases are zipped together on line 11 and mapped over on line 8. On line 9, we finally reduce over the flattened and zipped slidingWindow and singleKernelWeights. The zipping of the slidingWindow and singleKernelWeights ensures that the reduction operates on pairs of corresponding elements from both arrays. The reduction operator multiplies the corresponding elements and adds the result to the accumulator, which is initialized with singleKernelBias.

Figure 4. Visualization of the low-level Lift expression.

4.2 Low-Level Lift Expression

As shown in Listing 3, convolution is expressed as a set of reductions over sliding windows. However, in popular deep CNNs such as VGG, ResNet and GoogleNet, most convolutional layers are wide to such an extent that the whole input does not fit in the cache (e.g., L2). We address this issue by tiling the input and splitting the reduction in two steps.
The first GPU kernel splits the input into tiles, splits each sliding window of each tile into chunks, and reduces each chunk to a single value. The resulting vector of values per sliding window is reduced to one final value in the second GPU kernel. To ensure that the tiles fit perfectly with the input sizes, extra padding might be required on the input using another GPU kernel before processing the data. Conversely, an extra GPU kernel might be required at the end to crop back the output. We discuss all four stages below.

4.2.1 Padding

The padding expression has a dual purpose. First, it pads the input with zeros along all four edges as per the neural network architecture. Secondly, it zero-pads the input across the right and bottom edges so that the resulting array can be perfectly tiled. The amount of padding \(p\) is determined automatically by a constraint solver and is explained later.

### Table 1. OpenCL dimension sizes defined in terms of tuning parameters

<table> <thead> <tr> <th>Dimension</th> <th>Size</th> </tr> </thead> <tbody> <tr> <td>Workgroup dim. 1</td> <td>Number of tiles in the input</td> </tr> <tr> <td>Workgroup dim. 0</td> <td>Number of kernel groups</td> </tr> <tr> <td>Thread dim. 1</td> <td>Number of sliding window groups in a tile</td> </tr> <tr> <td>Thread dim. 0</td> <td>Number of sliding window partitions</td> </tr> </tbody> </table>

4.2.2 Partial Convolution

Figure 4 presents an overview of the partial convolution algorithm. Given an input image and a set of convolutional kernels, we split the image into tiles and the kernels into kernel groups. Each combination of a tile and a kernel group is processed by a single work group. Then, a window of spatial size $\text{kernelWidth} \times \text{kernelHeight}$ is slid across the tile. This results in a set of sliding windows, which at this point are just virtual views into the data. Each sliding window is flattened across the two spatial dimensions and input channels, and split into chunks.
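The tileability padding of §4.2.1 amounts to a divisibility constraint. A hypothetical sketch of what the constraint solver must guarantee (the solver itself is not shown in the paper; the search below is purely our illustration):

```python
# Smallest extra zero-padding p such that tiles of size ts, advanced by
# tile_step, exactly cover an input of size n (per spatial dimension).
def extra_padding(n, ts, tile_step):
    p = 0
    while n + p < ts or (n + p - ts) % tile_step != 0:
        p += 1
    return p

def num_tiles(n, ts, tile_step, p):
    # Tile count once the padded size fits the tiling exactly.
    return (n + p - ts) // tile_step + 1
```

For example, an input of size 11 with tiles of size 4 advanced by 3 needs 2 extra elements of padding, after which it is covered by exactly 4 tiles.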
Each chunk is processed sequentially by a single thread. Each thread can process chunks from more than one sliding window. Each kernel is split into chunks accordingly; kernels are flattened across three dimensions and split. Each sliding window chunk is coupled with the corresponding chunks in each of the kernels in the group. A thread processes each pairing of the input chunk with the kernels in a kernel group. Processing each input-kernel chunk pair involves multiplying input values and corresponding weights, and summing the resulting vector. Thus, each sliding window is reduced to a vector of values, corresponding to each chunk in the sliding window. This is a partial reduction; another expression further reduces this vector to a single value, resulting in a full convolution of each sliding window to a single output value. Listing 4 shows our Lift algorithm. First, the input is tiled using $\text{slide2D}$ and the 2D array of tiles is flattened (line 5). The tile size is controlled by the parameter $\theta$ and the stride is calculated to minimize the amount of tile overlap: $$\text{tilingStride} = \theta - (\text{kernelWidth} + \text{kernelHeight} - \text{kernelStride})$$ We express convolution within each tile by nesting a second $\text{slide2D}$ on line 6. This new five-dimensional view of the input data is further transformed using the inner expression on line 8. The 3D sliding window and convolutional kernels are represented as flat vectors; this simpler data layout enables coalescing of data accesses using \texttt{reorder}, an important GPU optimization that improves locality. The elements are virtually reordered with a stride of \texttt{windowSize}/\omega, where \omega refers to the size of the partial window processed by one thread. The resulting stride is the number of threads processing the same window, ensuring that threads access consecutive elements. The window is vectorised with vector length \nu, which is important for the Mali GPU.
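The effect of the reorder can be illustrated with index arithmetic (our own sketch, not the Lift primitive): with windowSize/ω threads per window, reordering with that stride means that at every sequential step the threads together touch a consecutive range of memory.

```python
def thread_indices(window_size, omega):
    """Element indices read by each thread after a strided reorder:
    thread t's i-th element sits at index i * n_threads + t."""
    n_threads = window_size // omega
    return [[i * n_threads + t for i in range(omega)]
            for t in range(n_threads)]
```

With window_size = 8 and ω = 2, four threads read [0, 4], [1, 5], [2, 6], [3, 7]: at step 0 they access locations 0..3 together, a coalesced access.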
Finally, windows are split in groups; each thread will process chunks from the whole group of sliding windows. Lines 18-21 express the mapping of parallel computations onto OpenCL threads; for the sizes of the respective work group dimensions, see Table 1. In dimension 1, each work group processes one input tile; in dimension 0, each work group is assigned one group of convolutional kernels. The grouping of kernels is expressed on line 16; the size of a kernel group is controlled by the parameter \kappa. In local dimension 1, threads are assigned an input window group. In local dimension 0, threads are assigned a chunk of each input window in a window group and a set of corresponding chunks of a group of kernels. By reading the input window chunk only once and reusing it for \kappa kernels within the same thread, we reduce the number of reads by a factor of \kappa; by reading the kernels once and reusing them for \sigma sliding windows within the thread, we further reduce the number of reads by a factor of \sigma. By iterating across the fastest changing dimension 0 in the innermost loop, we ensure that the quad threads access consecutive window chunks; thanks to the coalescing reorder stored in the view, quad threads access consecutive locations in memory, further reducing the number of reads by a factor of four. The reduction of the partial window across several kernels is expressed on line 24. The accumulator is initialised to a vector of \kappa zeros on line 25, and the input to \texttt{reduceSeq} on line 33 is an array of tuples of partial window elements and corresponding elements from kernel weights.
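The reuse argument above can be made concrete with a small read-count model (our own back-of-the-envelope sketch; the quad-coalescing factor is omitted): without reuse, every (window, kernel) pair re-reads `chunk` elements from each side; reusing a window chunk for κ kernels and a kernel chunk for σ windows divides the respective read counts.

```python
def element_reads(n_windows, n_kernels, chunk, kappa=1, sigma=1):
    """Total elements read from memory, with window chunks reused for
    kappa kernels and kernel chunks reused for sigma windows."""
    window_reads = n_windows * (n_kernels // kappa) * chunk
    kernel_reads = n_kernels * (n_windows // sigma) * chunk
    return window_reads + kernel_reads
```

For 9 windows, 4 kernels and 16-element chunks, the naive count is 1152 reads; with κ = 4 and σ = 3 (as in the best point of Section 7.3) it drops to 336.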
The \texttt{let} primitive on line 27 ensures that the input values are fetched into private memory once on line 33 and are reused across iterations of the sequential loop on line 28.

4.2.3 Summing partial results

The third expression completes the convolution by reducing the partial weighted sums of each window. Each work group processes

4.2.4 Cropping

The final expression reverses the effect of the extra padding performed in the first expression. It crops the output using pad2D with negative values for the padding sizes. The amount of horizontal and vertical cropping is calculated as: \[ \text{cropSize} = \frac{\rho}{\text{kernelStride}} \] The cropSize is guaranteed to be whole by the slide constraint discussed in Section 5.2.

5 Space Exploration

When exploring the search space of possible implementations, we leverage the rich algorithmic information captured by the Lift IR. Type safety and provable correctness of rewrite rules allow us to automatically explore structural code transformations that would otherwise require costly static analysis. Lift supports symbolic parameter values in its types. Parameter tuning consists of finding valid combinations of tuning values, replacing them at the type level, and generating a specialized implementation. This leads to GPU kernels that are specialized for the given input parameters and tuning values.

5.1 Tuning parameters and rewrite rules

Table 2 shows the tuning parameters.

<table> <thead> <tr> <th>Symbol</th> <th>Parameter</th> </tr> </thead> <tbody> <tr> <td>( \theta )</td> <td>Input tile size</td> </tr> <tr> <td>( \rho )</td> <td>Optimizational padding size</td> </tr> <tr> <td>( \kappa )</td> <td>Number of kernels per workgroup</td> </tr> <tr> <td>( \sigma )</td> <td>Number of sliding windows per thread</td> </tr> <tr> <td>( \omega )</td> <td>Sequentially processed input elements</td> </tr> <tr> <td>( \upsilon )</td> <td>Vector size</td> </tr> </tbody> </table> Table 2.
Convolution expression tuning parameters

**Unrolling** During rewriting, Lift optionally unrolls the innermost reduction of the partial convolution. The compiler also removes the loops over work group or work item indices where the corresponding dimension sizes are the same as the number of elements being processed.

5.2 Constraint inference

The expressiveness of Lift and the complex space produced by rewriting result in a high number of dependent and independent parameters, which is hard to analyze manually. To address the problem of parameter validation, we use automatic constraint inference based on the information encoded in the IR and the type system. By traversing the AST, we collect variables from types and parameters, and infer continuous and discrete constraints on the parameter values. A constraint is expressed as a record specifying the condition that must hold true and the list of parameters the condition is imposed upon. We present examples of the constraints that are automatically derived from a Lift expression.

**Algorithmic** Algorithmic constraints are inferred based on the type of an IR primitive and the values of its parameters. Satisfying such constraints is required for producing semantically correct results. For the split primitive, the inferred constraint is as follows: \[ \text{split} : (m : \text{int}, \text{in} : [T]_n) \Rightarrow n \% m = 0 \] This constraint ensures that the split input is divisible evenly into chunks of \( m \) elements. The compiler traverses the arithmetic expression of the condition \( n \% m = 0 \) and collects all the parameters; they are marked as co-dependent. asVector imposes a similar constraint to that of split: \[ \text{asVector} : (m : \text{int}, \text{in} : [T]_n) \Rightarrow n \% m = 0 \] slide comes in two conceptual flavours based on the constraints it imposes on the variables.
The slideStrict flavour requires that the sliding windows cover the input perfectly: \[ \text{slideStrict} : (\text{size} : \text{int}, \text{step} : \text{int}, \text{in} : [T]_n) \Rightarrow (n - \text{size}) \% \text{step} = 0 \] slideStrict must be used for tiling, when the semantic correctness of the expression must be preserved for all parameter values. For kernel sliding, we use the normal slide, since sliding is allowed to produce partial results; a notable example is the first layer of AlexNet [14].

**Hardware** The specifications of the target hardware impose constraints on the maximum number of threads in a single dimension, the work group size, the total memory allocated, and the maximum size of a single buffer.

6 Experimental Methodology

**Code generation** The Lift compiler is used to generate the code that runs on the GPU. We use an extended version of the Lift compiler to also generate OpenCL host code that sets up the device, compiles the GPU code, sends/retrieves the data and executes the GPU code. To explore the space of valid parameter value combinations, for each layer configuration we generated 1000 randomly chosen implementations that satisfy all the constraints. As a baseline to evaluate the performance of our generated code, we use the ARM Compute Library (v19.02) with the Graph API, implementing the same layers and running them on the GPU by indicating it as the target in the API. All the ARM Compute Library results are produced using ARM's built-in auto-tuner.

**Benchmarks** To evaluate the generated code, we use all nine unique layer configurations of the VGG-16 model [19].
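The constraint records of Section 5.2 and the random sampling of valid implementations can be sketched together as follows (our own illustration; the parameter names, ranges, and predicate are made up, whereas the real compiler derives the predicates from the types in the IR):

```python
import random

# A constraint couples a predicate with the co-dependent parameters
# it is imposed upon (here: the split/asVector condition n % m == 0).
constraints = [
    (["n", "m"], lambda p: p["n"] % p["m"] == 0),
]

def sample_valid(ranges, count, seed=0):
    """Rejection-sample `count` parameter assignments that satisfy
    every constraint; `ranges` maps parameter name to candidate values."""
    rng = random.Random(seed)
    points = []
    while len(points) < count:
        p = {name: rng.choice(vals) for name, vals in ranges.items()}
        if all(pred(p) for _, pred in constraints):
            points.append(p)
    return points
```

For example, `sample_valid({"n": [12], "m": [1, 2, 3, 4, 5, 6]}, 10)` only ever yields assignments where m divides 12.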
This network's performance is well studied in the literature, and it has higher resource requirements than others such as ResNet and GoogleNet [3]. Table 3 presents the layer configurations. All results are validated by using a fixed random input and comparing the output with that of PyTorch.

**Platform** In this paper, we target the ARM Mali-G72 (12 cores) mobile GPU in the HiSilicon Kirin 970 SoC running Debian GNU/Linux 9.8. The highest frequency (767MHz) was used.

**GPU execution time** For our own results, we evaluate GPU execution time using the cl_event associated with the kernel launches. For the ARM Compute Library, GPU execution time is measured by intercepting all OpenCL calls using our own profiler, which is an OpenCL wrapper library. The library automatically grabs the cl_event associated with each OpenCL kernel launch, or creates one on the fly if required. This is done in a fully transparent way and does not influence the application being profiled. This allows us to reuse the exact same methodology for measuring execution time for the Lift-generated GPU code and the ARM Compute Library. The numbers reported are the sum of all the GPU kernels involved in the operations of a convolutional layer, including the time to pad the input and crop the outputs.

7 Evaluation

This section explores the performance of the automatically generated direct convolution in Lift. A comparison is given against the best hand-written library for the ARM Mali GPU: the ARM Compute Library.

7.1 Comparison with ARM Compute Library

Table 3 shows the execution times of the Lift-generated OpenCL kernels and the ARM Compute Library direct convolution and GEMM implementations. Both of these versions have been auto-tuned using the tools provided by the ARM Compute Library. As evident from the results, the Lift-generated code is always faster than the ARM Compute Library direct convolution and more space-efficient than its GEMM method.
Furthermore, in some cases it is actually on par with or better than the highly tuned GEMM implementation. Figure 5 shows the performance of the Lift-generated code expressed as throughput (the amount of useful outputs generated per second) compared to that of the direct and GEMM-based convolutions from the ARM Compute Library. For every layer, Lift is faster than the ARM Compute Library direct convolution, and is $\times10$ faster on average. While the Lift kernels achieve only $\times0.7$ the throughput of the GEMM-based implementation, their memory consumption is $\times3.6$ smaller and is close to that of the vanilla direct convolution. This demonstrates that our approach based on automatic code generation outperforms a human expert.

7.2 Multi-objective optimization

Depending on the application, priorities in neural network inference optimization might shift. In a resource-bound system such as a mobile GPU that is shared among multiple tasks, a low memory footprint is required; for time-critical tasks, throughput or latency are to be prioritized. Figure 6 demonstrates how search space exploration allows for multi-objective optimization to cater for various budgets: advancing the Pareto frontier results in a set of implementation candidates to choose from, statically or at runtime, for specific time and space requirements. In the case of VGG layer 2, the compiler might prioritize space efficiency by using 25 MBytes to compute results in 100 ms; when the memory budget is bigger, the compiler can prefer the 77 ms kernel that uses 31 MBytes of space. Populating a sizeable Pareto set is made possible by the exploration of the tuning parameter search space, performed in a safe way thanks to constraint inference. Compared to libraries that depend on sets of handwritten kernels, a compiler can adapt to finer differences in the workload and target hardware.
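The Pareto-frontier selection above can be sketched as follows (our own illustration; two of the sample points echo the VGG layer 2 numbers quoted above, the rest are made up). A candidate is kept if no other candidate is at least as good in both time and memory:

```python
def pareto_front(points):
    """Non-dominated (time_ms, memory_mb) pairs; duplicates are dropped."""
    uniq = set(points)
    return sorted(p for p in uniq
                  if not any(q != p and q[0] <= p[0] and q[1] <= p[1]
                             for q in uniq))
```

For the candidates [(100, 25), (77, 31), (120, 40), (90, 35)], the front is [(77, 31), (100, 25)]: the 100 ms / 25 MB point is the space-efficient choice, and the 77 ms / 31 MB point the faster one.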
<table> <thead> <tr> <th>Layer</th> <th>Input</th> <th>Conv</th> <th>ARM Direct</th> <th>ARM GEMM</th> <th>Lift</th> </tr> </thead> <tbody> <tr> <td>0</td> <td>3x224x224</td> <td>64x3x3</td> <td>38.61</td> <td>2.98</td> <td>9.09</td> </tr> <tr> <td>2</td> <td>64x224x224</td> <td>64x3x3</td> <td>852.03</td> <td>80.14</td> <td>77.08</td> </tr> <tr> <td>5</td> <td>64x112x112</td> <td>128x3x3</td> <td>426.22</td> <td>37.94</td> <td>40.65</td> </tr> <tr> <td>7</td> <td>128x112x112</td> <td>128x3x3</td> <td>906.66</td> <td>88.09</td> <td>69.60</td> </tr> <tr> <td>10</td> <td>128x56x56</td> <td>256x3x3</td> <td>452.48</td> <td>23.73</td> <td>58.90</td> </tr> <tr> <td>12</td> <td>256x56x56</td> <td>256x3x3</td> <td>975.63</td> <td>60.45</td> <td>84.75</td> </tr> <tr> <td>17</td> <td>256x28x28</td> <td>512x3x3</td> <td>546.63</td> <td>22.30</td> <td>46.07</td> </tr> <tr> <td>19</td> <td>512x28x28</td> <td>512x3x3</td> <td>1201.93</td> <td>58.78</td> <td>94.83</td> </tr> <tr> <td>24</td> <td>512x14x14</td> <td>512x3x3</td> <td>31.14</td> <td>17.13</td> <td>19.8</td> </tr> </tbody> </table> Table 3. All unique convolutional layer configurations of VGG-16, with execution times in ms.

7.3 Analysis of the Best Point

We now analyze one of the best points that we found, using the 7th layer of VGG as an example. Table 4 shows the best tuning parameters found, together with the local thread sizes for the GPU kernel responsible for performing a partial convolution. These parameters show that a workgroup processes a tile which can fit 9 sliding windows. 4 out of 128 kernels are processed by a workgroup, enabling reuse of the input data multiple times without adding too much register pressure; 3 out of 9 sliding windows are processed by each thread, enabling reuse of the weight data. The amount of padding is also quite minimal, which avoids unnecessary work. We also see that this point is vectorized, which is good for memory loads on the Mali-G72 architecture.

8 Related Work

Several deep learning frameworks have recently been developed.
Most of these frameworks rely on high-level graph-based representations of neural networks [1, 3, 13, 18] to allow for automatic differentiation. Such graphs are too high-level to be mapped optimally to specific hardware, so frameworks rely on hand-written code provided by hardware vendors, as found in Intel's MKL-DNN, Nvidia's TensorRT and ARM's Compute Library. To address this, multi-level graph representations such as MXNet, XLA and TVM [3, 4, 17] have also been proposed, allowing subgraph and dataflow optimizations to be made device-specific. TensorComprehensions [24] makes use of the polyhedral compilation model to perform operation optimization and scheduling, but so far only targets CUDA-capable GPUs. Depending on the target hardware, MXNet either provides handwritten layer implementations, which lack portability, or uses BLAS libraries such as Atlas, MKL, cuBLAS and OpenBLAS. These libraries are also constrained in how much they can adapt to the target hardware, relying just on tuning and handwritten code selection. Another code generator with auto-tuning is Latte [21], which has shown good performance for CPU code, although it has not been evaluated on mobile devices. Its performance is achieved by generating code with cross-layer fusion, which makes modeling exact layer conditions problematic. On mobile platforms, MXNet only supports CPU-based libraries.

<table> <thead> <tr> <th>Parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Input tile size</td> <td>5 × 5</td> </tr> <tr> <td>Number of kernels per workgroup</td> <td>4</td> </tr> <tr> <td>Number of windows per thread</td> <td>3</td> </tr> <tr> <td>Sequentially processed input elements</td> <td>144</td> </tr> <tr> <td>Optimizational padding size</td> <td>11</td> </tr> <tr> <td>Vector size</td> <td>4</td> </tr> <tr> <td>Unrolling</td> <td>No</td> </tr> <tr> <td>Coalescing</td> <td>Yes</td> </tr> </tbody> </table> Table 4.
Best parameters found for layer 7 of VGG-16

Other works have recently explored efficient implementations of direct convolution [2, 9, 25] but are limited in the scope of their available target platforms. In particular, [9, 25] rely on the availability of SIMD instructions and are specific to CPUs. Tsai et al. [22] rely on an efficient implementation of OpenCL kernels to reduce the memory requirement of GEMM by avoiding replication of input patches; however, this is not fast enough for mobile devices. There have also been several developments at the algorithmic level allowing for fast approximations to convolution [16, 23] or computationally cheaper substitutions [6, 7, 12]. In this work we have not considered such approximate methods, but leave them for future exploration.

9 Conclusions

Most machine-learning frameworks rely on GEMM to implement convolutions due to the availability of high-performance implementations on most parallel devices. The downside is that GEMM requires an order of magnitude more memory than direct convolution, which can restrict the application of neural networks on memory-limited embedded devices. Direct convolution is an attractive alternative; however, hardware-vendor-provided implementations are often an order of magnitude slower than their GEMM counterparts. This paper has shown how we automatically generate high-performance direct convolution with Lift for the ARM Mali GPU. This approach leads to a ×10 speedup and ×3.6 memory saving over the tuned ARM Compute Library implementations.

Acknowledgments

This work was supported by the Engineering and Physical Sciences Research Council (grant EP/L01503X/1), EPSRC Centre for Doctoral Training in Pervasive Parallelism at the University of Edinburgh, School of Informatics.

References
1. Introduction Big data analytics (BDA) applications use Machine Learning (ML) algorithms to extract valuable insights from large, fast, and heterogeneous data. These BDA applications require complex software design, development, and deployment to deal with the big data characteristics of volume, variety, and velocity (the 3Vs) while maintaining expected performance levels. Specifically, BDA processing takes advantage of cutting-edge technologies and infrastructures that enable distributed stream computing. But the complexity involved in BDA application development frequently leads to delayed deployments (Chen et al., 2016) and hinders performance monitoring (e.g. throughput or latency) (Ranjan, 2014). Regarding the 3Vs, a BDA solution can be constrained by different performance metrics. For instance, real-time stream analytics applications require low latency and flexible scalability based on data volume fluctuation. On the other hand, heavy workloads, which imply batch processing over big data, demand high scalability and fault tolerance to meet a particular deadline. One of the key goals of software architecture is the design of the system's structures and their relationships to achieve expected quality properties. The development of BDA solutions involves three knowledge domains: business, analytics, and technology. In the business domain, business experts have to define business goals and quality scenarios (QS) to drive analytics projects. In the analytics domain, these business goals are translated into specific analytic tasks by data scientists. Finally, in the technology domain, software architects make decisions in terms of tactics, patterns, and deployment considerations, keeping quality attributes in mind. Stakeholders from different domains face heterogeneous concerns and different abstraction levels.
Due to the lack of techniques and tools to enable articulation and integration of such domains, the development of BDA solutions suffers a costly and error-prone transition between development and production environments (Chen et al., 2016; Wegener and Rüping, 2010). Though there is a growing interest of companies in big data adoption, real deployments are still scarce (the “Deployment Gap” phenomenon) (Chen et al., 2017). In the same vein, previous surveys (Rexer, 2013; Rexer et al., 2016; Castellanos et al., 2019) have reported low deployment frequency and delayed deployment procedures caused by analytics model translation, lack of tool interoperability, and poor stakeholder communication. These pitfalls could be the result of the traditional approach to BDA development, where the data scientist produces the models as source code implemented using machine-learning-oriented tools which are focused on analytics perspectives within a controlled environment (a data lab). On the other hand, software architects have to translate these models into software products, which usually implies rewriting code to obtain productive software components deployed on specific IT infrastructures. This paper proposes ACCORDANT (An executable architecture mOdEl for big Data ANalyTics), a DevOps and Domain-Specific Model (DSM) approach to develop, deploy, and monitor BDA solutions, bridging the gap between the analytics and IT domains. ACCORDANT allows designing BDA applications using QS, functional, and deployment views. A QS specifies a quality attribute requirement for a software artifact to support design and quality assessment. The functional view defines the architectural elements that deliver the application's functionality. The deployment view describes how software is assigned to hardware processing and communication elements. Our deployment strategy incorporates containerization, since it offers consistent modularity that facilitates portability, continuous integration, and delivery.
ACCORDANT is validated using four use cases from different domains by designing functional and deployment models and assessing performance QS. This validation aims to reduce the time of design, deployment, and QS monitoring of BDA solutions. These use cases range from public transportation and avionics safety to weather forecasting, and they include distributed batch, micro-batch, and stream processing. Our results indicate improvements in design and (re)deployment times to achieve the expected performance metrics. In summary, the contributions of this paper are as follows:

- A DSM framework to formalize and iteratively accelerate the development and deployment of BDA solutions by specifying architectural functional and deployment views aligned to QS.
- Three integrated domain-specific languages (DSLs) to specify architectural inputs, component-connector models, and deployments, thus accelerating the BDA deployment cycle.
- A containerization approach to promote delivery automation and performance metrics monitoring for BDA applications aligned to QS.
- The evaluation of this proposal applied to four use cases from diverse domains, using different deployment strategies and QS.

The rest of this paper is organized as follows. In Section 2, we describe the background on DSM, big data analytics, and DevOps. Section 3 reviews related work. Section 4 presents our methodology and proposal overview. Section 5 presents the use cases for experimentation. Section 6 illustrates the steps followed to validate this proposal. Section 7 presents and discusses the obtained results. Finally, Section 8 summarizes the conclusions and future work. 2. Background This section describes the core concepts on which this proposal is built: domain-specific modeling, software architecture, big data analytics, and DevOps. 2.1.
Domain-Specific Modeling (DSM) and software architecture Domain-Specific Modeling enables software to be modular and resilient to change through the separation of concerns (SoC) principle, by specifying technology-agnostic concepts, relationships, and constraints within the domain. An important advantage of DSM is the close mapping between the problem and solution domains, which enables code generation. Moreover, DSM can speed up and optimize the code generated for the specific platform, improving productivity. In order to enable code generation, the domain model needs to be narrow, and it is constrained by a language specification, the metamodel. Furthermore, due to the metamodel's narrow scope, the models can be read, checked, validated, and interpreted to generate specific implementations. Regarding representations, DSM can be expressed in graphical, textual, or mixed notation according to the domain context. It is possible to embed multiple views or aspects (for example, analytics, software components, and deployment) using different representations that share elements or mappings. An architecture description language enables architects to express high-level system structure by describing its coarse-grained components and the connections among them. These descriptions are contained in architectural views that address different concerns, and these views are built based on collections of patterns, templates, and conventions called Viewpoints (Rozanski and Woods, 2005). The architectural design is driven by quality scenarios and primary functional requirements through a systematic design method, such as the Attribute-Driven Design method (ADD, Cervantes and Kazman, 2016). ADD starts by identifying inputs: QS, functional requirements, and constraints. In each ADD iteration, a design goal is defined from these inputs, and the selection of architectural structures, tactics, and patterns, and their application described across views, aims at achieving such a goal.
A pattern is a standard, well-known, and reusable solution to a common problem in software architecture. Tactics are design primitives to achieve a response for a particular quality attribute. Previous studies have collected both patterns (Erl et al., 2016; Marz and Warren, 2015) and tactics (Gorton and Klein, 2014; Ullah and Babar, 2019) to be applied in the BDA domain.

2.2. Big data analytics

In the BDA context, data processing models aim at specific application requirements: batch processing handles large stored datasets all at once with high performance, while stream processing handles an unbounded data flow in (near) real-time. Due to the complexity of deploying and operating BDA solutions, which integrate a myriad of technologies, complex analytics models, and distributed infrastructure, some research has tackled such complexity by raising the level of abstraction (Gribaudo et al., 2017; Guerriero et al., 2016; Huang et al., 2015). Given the wide range of BDA technologies, portability plays a key role in deploying, operating, and evolving BDA applications, and this is where portable standards such as the Predictive Model Markup Language (PMML)¹ or the Portable Format for Analytics (PFA)² appear. PMML is the de facto standard proposed by the Data Mining Group that enables portability of analytics models through a technology-neutral XML format. PMML allows specifying a set of machine learning models and data transformations along with their metadata.

2.3. DevOps and IaC

According to Bass et al. (2015), DevOps is a set of practices aiming to reduce the time from software development to the production environment while ensuring high quality. DevOps includes activities such as deploying, operating, and monitoring applications, with the goals of improving deployment frequency and speeding up the time to put changes into production.

² [http://dmg.org/pfa/](http://dmg.org/pfa/)

3. Related work

Several works have proposed frameworks to build and deploy BDA applications.
We review and compare some of the most relevant works that comprise building blocks to construct and deploy BDA pipelines. Indeed, some works have applied DSM to describe functional and deployment viewpoints involving DevOps practices. We summarize and compare the reviewed related work in Table 1, addressing the identified problem and our vision of using separation of concerns (SoC), domain-specific modeling, and DevOps to deal with the deployment gap. Each column of Table 1 details a feature we identified in the related work, as follows. SoC is a key design principle for us, since the knowledge domains involved in BDA (business, analytics, and IT) have to be tackled from different perspectives (i.e., viewpoints). In terms of the analytics domain, cross-industry (CI) and technology-neutral models (TNM) promote applicability and BDA portability, respectively. Regarding software architecture concepts, QS specification (QS), functional (FV), and deployment (DV) views allow us to describe orthogonal concerns such as quality scenarios, component-and-connector models, and deployment models. Architectural tactics (AT) are design decisions that influence the control of a QS response. A target-technology assignment (TTA) complements DSM approaches by supporting a predefined set of technologies (P) or extensible code generators (C). Finally, considering DevOps practices, the deployment specification column (DS) defines whether only a number of instances (I) per component or a whole deployment diagram (D) can be described. Additional practices that facilitate the deployment and operation processes are considered: continuous deployment (CD), QS monitoring (QSM), and self-adaptation (SA).

Some works have presented DSM to model analytics functions; however, they do not tackle architecture concepts and deployment considerations because they focus only on functional definitions. Lechevalier et al.
(2015) introduce a DSM framework for predictive analytics of manufacturing data using artificial neural networks to generate analytics models. Sujeeth et al. (2011) present OptiML, a DSL for machine learning that describes analytics functions using a statistical model covering a subset of ML algorithms; these analytics functions are analyzed and optimized before code generation. CloverDX (0000) is a commercial tool to design data transformations and analytics workflows in a visual way, integrating external APIs and including parallel processing on multiple nodes. CloverDX’s functional view includes readers, processors, and writers for a predefined set of technologies, but a deployment view is not available, and distributed processing must be defined with specific parallel nodes in the functional view, which prevents using the same functional definition in different deployment strategies. Finally, technology-neutral models, performance scenario specifications, and architectural tactics are not supported.

In contrast, we found another group of studies interested in the infrastructure concerns of BDA applications, leaving aside their functional components. Gribaudo et al. (2017) propose a modeling framework based on a graph-based language to evaluate the performance of running applications that follow the lambda architecture pattern. This modeling framework allows users to define stream, batch, storage, and computation nodes along with performance indices to be simulated and evaluated, but neither a functional BDA application nor real infrastructure provisioning is produced as a result. Huang et al. (2015) introduce a model to design, deploy, and configure Hadoop clusters through an architecture metamodel and rules, which describe BDA infrastructure and deployment automation. Their work focuses on the design, deployment, and evaluation of BDA technology infrastructures. However, it leaves out the functional analytics models needed to get an integrated BDA solution.
QualiMaster (Alrifai et al., 2014) focuses on the processing of online data streams for real-time applications, such as the risk analysis of financial markets, regarding metrics of time behavior and resource utilization. The aim of QualiMaster is to maximize the throughput of a given processing pipeline. Similarly, our proposal generates software for BDA applications, but takes as input the analytics specification of a predictive model and the performance metrics to be achieved. Unlike QualiMaster, our proposal is technology-neutral and cross-industry, which enables a more widespread application. FastScore (Open Data Group) is a commercial framework to design and deploy analytics models. Analytics components are conventionally developed using a given programming language or a PMML file, and once imported to the platform, they can be connected to data inputs and outputs. Quality scenarios cannot be specified, but performance metrics can be visualized. Deployment is realized through engines (containers) where models are executed, and the deployment design is limited to the engine replication factor to increase the concurrency of analytics models. SpringXD (Anandan et al., 2015) is a unified, distributed, and extensible system for data ingestion, analytics, processing, and export that aims to simplify BDA development and deployment. In SpringXD, modules are data processing units of one of three types: source, processor, or sink, and they can be connected using messaging abstractions called message buses to build BDA pipelines. Modules run over a cluster of containers, which can be replicated to a fixed number and monitored to observe performance behavior, although these metrics are not application-oriented but infrastructure-oriented (e.g., CPU and memory use). Similar to our approach, analytics processors can be defined through PMML models, but target technologies are limited to a set of predefined options. The DICE project in Guerriero et al. (2016) and Artac et al.
(2018) presents a DSM offering big data design that comprises data, computation, technology-framework, and deployment concepts to design and deploy data-intensive applications. DICE proposes a model-driven engineering approach to develop application models that are automatically transformed into IaC. In addition, DICE includes quality-of-service requirements associated with elements within the application, which are analogous to QS. Perez-Palacin et al. (2019) presented a profile to enable performance and reliability assessment. DICE supports configuration management, service provisioning, and application deployment, but technology-neutral models and architectural tactics are not considered, which could hinder portability and design decision tracing. Due to its focus, DICE requires design at a very detailed level, specifying different constructs regarding target technologies; in our proposal, by contrast, technology-specific generators transform functional and deployment artifacts into code.

To summarize, the related work reviewed tackles the design of BDA applications, but is not concerned with deployment architectural decisions. Specifically, only four proposals follow the SoC principle (Alrifai et al., 2014; Open Data Group; Anandan et al., 2015; Guerriero et al., 2016), and among them, only QualiMaster and DICE (Guerriero et al., 2016) offer a deployment viewpoint. From the architecture perspective, tactics and QS specifications are scarcely ever considered. Based on these findings, we argue that our proposal bridges such gaps.

4. ACCORDANT: A DevOps and domain-specific model approach

This proposal aims at offering a high-level approach to design BDA solutions starting from architectural artifacts instead of source code. Specifically, we propose ACCORDANT (An exeCutable ArChitecture mOdel for Big Data Analytics) to deal with functional, infrastructure, and QS requirements.
Our proposal comprises a design and deployment process, and a DSM framework to support that process. This paper extends the metamodel proposed in Castellanos et al. (2018) by aligning the ACCORDANT process to ADD, and by including architectural inputs, containerization, and serverless deployments in the DV. Fig. 1 depicts the ACCORDANT process, which adapts and integrates an architecture design method (ADD) and analytics methodologies. The steps performed using the ACCORDANT modeling framework are framed in solid lines, while the steps made with external tools are represented by dotted lines. The ACCORDANT process is iterative, and it is composed of seven steps: the business user defines (1.1) business goals and (1.2) QS, which will guide the next steps. (2) The data scientist develops data transformations, and builds and evaluates analytics models. The resulting analytics models are exported as PMML files. (3) The architect designs the software architecture using the ACCORDANT metamodel in terms of the Functional Viewpoint (FV) and Deployment Viewpoint (DV); the FV model makes use of PMML models to specify the software behavior. (4) FV and DV models are interwoven to obtain an integrated model. (5) Code generation of software and infrastructure is performed from the integrated models. (6) The code generated in the previous step is executed to provision infrastructure and install the software. (7) QS are monitored in operation to be validated, and design adjustments can be made to achieve the QS, if necessary.

4.1. Architectural inputs

According to architecture design methods such as Attribute-Driven Design (ADD) (Wojcik et al., 2006), architecture design is driven by predefined quality scenarios (QS), which must be achieved through design decisions compiled in well-known catalogs of architectural patterns and tactics. Both QS and tactics are inputs of the architecture design; therefore, we include these initial building blocks in the ACCORDANT metamodel along with other concepts defined in ADD. Fig.
2 details the main input building blocks grouped in the architectural input package (InputPackage), which contains the elements required to start the architectural design: Quality Scenario (QScenario), Analyzed QS (AnalyzedQS), SensitivityPoint, and Tactic. A QScenario determines a quality attribute requirement (i.e., latency, availability, scalability, etc.) for a specific Artifact. Thus, for instance, a QScenario could be defined as “latency <= 3 seconds for an artifact X”, where artifact X corresponds to a software component or connector. A QS is analyzed through an AnalyzedQS and sensitivity points. A SensitivityPoint is a property of a decision (a set of elements and their relationships within an architectural view) that is critical for achieving the QS, where such a decision is the application of a tactic to a specific application context. Finally, Tactic elements synthesize BDA tactics found in Gorton and Klein (2014) and Ullah and Babar (2019) to be applied in an architecture instance, e.g., dynamic resource allocation, health monitoring, parallel processing, feature selection, etc. Once QScenarios, AnalyzedQS, and SensitivityPoints are defined in step 1.2 of the ACCORDANT process, the software architecture is designed in step 3 and expressed in the views by instantiating tactics in a concrete application. These decisions are associated via SensitivityPoints, and they will be evaluated against the initial QS to validate whether the architecture is achieving its goal.

4.2. Functional viewpoint (FV)

The FV allows us to design analytics pipelines in terms of ingestion, preparation, analysis, and exporting building blocks. The FV specifies the functional requirements of the analytics solution, and its constructs are described in a technology-neutral way, as detailed in the metamodel depicted in Fig. 3. The FV is expressed in a component-connector structure.
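To make the architectural inputs of Section 4.1 concrete, the following minimal Java sketch represents a QScenario such as “latency <= 3 seconds for an artifact X” and checks a measured value against it. The class and field names are our own illustration, not ACCORDANT's actual metamodel API.

```java
// Illustrative sketch only: class and field names are our assumptions,
// not the actual ACCORDANT metamodel API.
class QScenario {
    final String artifact;   // the software component or connector the QS applies to
    final String metric;     // quality attribute metric, e.g. "latency"
    final double threshold;  // upper bound for the metric, in seconds

    QScenario(String artifact, String metric, double threshold) {
        this.artifact = artifact;
        this.metric = metric;
        this.threshold = threshold;
    }

    // A QS is achieved when the measured response stays within the threshold
    boolean isAchieved(double measuredValue) {
        return measuredValue <= threshold;
    }

    public static void main(String[] args) {
        // "latency <= 3 seconds for an artifact X"
        QScenario qs = new QScenario("X", "latency", 3.0);
        System.out.println(qs.isAchieved(2.4));  // true: within threshold
        System.out.println(qs.isAchieved(3.7));  // false: QS violated
    }
}
```

Step 7 of the process then amounts to evaluating such checks against the metrics collected in operation.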
Sensitivity points, from the architectural inputs, can be associated with components and connectors to represent where architectural decisions have an impact regarding the QS. Component metaclasses are specialized into Ingestors, Transformers, Estimators, and Sinks. Estimators and Transformers are the software component realizations of a PMML data model and data transformer, respectively, and the PMML file defines their behavior. A Component exposes required and provided Ports. Connector metaclasses transfer data or control flow among components through input or output Roles. A set of connector types is defined based on the connector classification proposed by Taylor et al. (2010): Procedure Call, Event, Stream, Adapter, Distributor, and Arbitrator. A Procedure Call connector models control flow and communication through invocations. Similarly, an Event connector affects the control flow and provides data transfer, but it is subject to the occurrence of events that notify all interested parties. A Stream connector is used to transfer large amounts of continuously generated data. Adapters enable interaction between components that were not designed to interoperate by providing conversion features. Distributor connectors identify interaction paths and route communication. An Arbitrator streamlines system operation and resolves conflicts, thus offering intermediary services.

Fig. 1. ACCORDANT process overview.
Fig. 2. Excerpt of architecture inputs metamodel.
Fig. 3. Excerpt of functional viewpoint of ACCORDANT metamodel.

4.3. Deployment viewpoint (DV)

The Deployment viewpoint integrates DevOps practices including containerization, IaC, and serverless computing. The DV specifies how software artifacts (components and connectors) are deployed on a set of computation nodes. The main metaclasses are detailed in Fig. 4. The DV metamodel comprises Pod, ExposedPort, and Deployment metaclasses to operationalize BDA applications in a specific technology.
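Returning to the FV connector taxonomy above, it can be summarized in code. The capability flags below (control flow vs. data transfer) are our own reading of the connector descriptions, not part of the ACCORDANT metamodel:

```java
// Sketch of the FV connector taxonomy (Taylor et al., 2010) described above.
// The capability flags are our interpretation of the prose, not ACCORDANT's API.
class Connectors {
    enum ConnectorType {
        PROCEDURE_CALL(true,  true),   // control flow + communication via invocations
        EVENT         (true,  true),   // control/data flow triggered by event occurrence
        STREAM        (false, true),   // continuous transfer of large data volumes
        ADAPTER       (false, true),   // conversion between non-interoperable components
        DISTRIBUTOR   (false, true),   // interaction paths and communication routing
        ARBITRATOR    (true,  false);  // conflict resolution and intermediary services

        final boolean affectsControlFlow;
        final boolean transfersData;

        ConnectorType(boolean control, boolean data) {
            this.affectsControlFlow = control;
            this.transfersData = data;
        }
    }

    public static void main(String[] args) {
        for (ConnectorType t : ConnectorType.values()) {
            System.out.println(t + ": control=" + t.affectsControlFlow
                                 + ", data=" + t.transfersData);
        }
    }
}
```

An enum of this kind is one natural target for a generator that maps connector types to concrete technologies (e.g., Event connectors to a message broker).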
It is noteworthy that an FV model can be deployed in different DV models, either to use a different strategy or to test the fulfillment of predefined QS. The DV contains Devices, Services, Deployments, serverless environments (ServerlessEnv), and Artifacts. Sensitivity points can be assigned to Deployments and Artifacts to map critical architectural decisions in the DV. A Device is a worker machine (physical or virtual) on which the Pods are deployed. A Pod is a group of one or more execution environments (ExecEnvironment) which can share storage and network. An ExecEnvironment represents a container with a Docker image and specific resource requirements (CPU, memory). A Deployment specifies the desired state for a group of Pods and its deployment strategy, including the number of replicas. Services and ExposedPorts define the policies, addresses, ports, and protocols by which Pods are accessed from outside the cluster network. A ServerlessEnv element describes a computing environment in which a cloud provider runs the server and dynamically manages the allocation of machine resources, as opposed to an ExecEnvironment, where physical resources have to be defined and managed. Artifacts correspond to executable or deployable representations of functional elements (i.e., components and connectors from the functional view) which can be deployed on either execution or serverless environments.

Once the PMML, FV, and DV models are designed and integrated, code generation takes place by means of model-to-text transformations. Code generation is twofold: software code and infrastructure (IaC) code. On the software side, each component and connector is assigned to a specific technology according to the constraints specified in the model (processing model, ML algorithm, delivery type, sync type, etc.). Such an assignment enables us to generate code for the target technology restricted to these constraints.
For instance, near real-time analytics requires the stream or micro-batch processing offered by Apache Storm or Spark, respectively, and Event connectors such as Apache Kafka or RabbitMQ. Regarding QS monitoring, the code generators include specific machinery to log metrics at the application level. This allows us to collect QS-specific metrics from a high-level abstraction, saving the cost of adding metric-logging code for each application and target technology. On the IaC side, the DV model is transformed into Kubernetes configuration files (in YAML format) used to create and configure infrastructure over a Kubernetes cluster. The Kubernetes files contain Nodes, Pods, Deployments, and Services, which are executed through Kubectl. In the last step, the performance metrics of the BDA application are gathered to be compared against the initial QS and evaluate the fulfillment of the quality requirements. In this step, the architect has to check the outputs and make decisions in the architectural views if the QS are not achieved. This process can take several iterations, and this is the whole cycle that we expect to accelerate using ACCORDANT.

5. Evaluation with four BDA use cases

Our experimentation compares development and deployment time for each iteration using ACCORDANT and two other frameworks reviewed in Section 3: FastScore and SpringXD. We chose these frameworks because they are the closest to our approach, and they support portable analytics models (PMML or PFA). We validated our proposal in different domains through four use cases: UC1) transport delay prediction, UC2) near mid-air collision risk analysis, UC3) near mid-air collision detection, and UC4) El Nino/Southern Oscillation cycles. Table 2 summarizes the use cases, domains, processing models, and quality attributes.
These use cases apply different analytics models, and they illustrate BDA facets such as streaming and micro-batch processing to deal with the velocity aspect, while batch processing focuses on volume, in terms of data size and computation complexity. Fig. 5 details the component-connector model for each use case to illustrate the functional building blocks and their composition as BDA pipelines. The ACCORDANT specification of these use cases is publicly available,\footnote{http://github.com/kmilo-castellanos/accordant-usecases} and the use cases are described below.

Table 2
Use Cases.

<table>
<thead>
<tr>
<th>Use case</th>
<th>Description</th>
<th>Domain</th>
<th>Analytics model</th>
<th>Processing model</th>
<th>QS metric</th>
</tr>
</thead>
<tbody>
<tr>
<td>UC1</td>
<td>Transport delay prediction</td>
<td>Transportation</td>
<td>Regression tree</td>
<td>Stream</td>
<td>Update time, latency</td>
</tr>
<tr>
<td>UC2</td>
<td>NMAC risk analysis</td>
<td>Avionics</td>
<td>K-means</td>
<td>Batch</td>
<td>Deadline</td>
</tr>
<tr>
<td>UC3</td>
<td>NMAC detection</td>
<td>Avionics</td>
<td>Decision tree</td>
<td>Micro-batch</td>
<td>Latency</td>
</tr>
<tr>
<td>UC4</td>
<td>El Nino/Southern oscillation</td>
<td>Weather</td>
<td>Polynomial regression</td>
<td>Batch</td>
<td>Deadline</td>
</tr>
</tbody>
</table>

Fig. 5. Component diagrams of Use Cases.

5.1. Use case 1 (UC1)

The first use case was presented in Castellanos et al. (2018), and it deals with delay prediction of public transportation in Vancouver. Bus trip data is collected in real-time from the Vancouver transport operator, and it contains bus stops, routes, and times. A regression tree model to predict bus delays (in seconds) is built, evaluated, and exported to PMML. The pipeline, described in Fig.
5a, starts with an ingestor component which receives HTTP requests and puts them into an event connector (message broker); the request message is then consumed by the estimator, which predicts the delay time and queues it to be stored into a No-SQL (hierarchical) database. The PMML model is deployed into the production environment as a delay predictor service, using OpenScoring, a Kafka message broker, and a MongoDB writer as target technologies. The QS were defined in terms of performance and modifiability attributes. The first QS specifies that users make 1000 requests to the delay prediction service under operation without load, and the responses must have an average latency lower than 2 s. The second QS states that when the data scientist produces a new version of the predictive model (a new PMML file), it must be updated at runtime within 10 s.

5.2. Use case 2 (UC2)

UC2 was applied in aviation safety to detect near mid-air collisions (NMAC) on different air space ranges with different deployment models while performance QS are monitored. This use case is described in Fig. 5(b), and it was presented in Castellanos et al. (2019). NMAC detection comprises a pairwise comparison of flights, \( \binom{n}{2} \) comparisons, where \( n \) is the number of flights. Each comparison requires calculating distance and time based on location, speed, and heading to determine the risk level of NMAC, which implies an intensive computation of quadratic time complexity. Eight hours of data were stored in a distributed file system to be loaded by a JSON reader component. This ingestor calls the NMAC detector, which computes the alert level. Once an alert level is calculated for each flight pair, the results are sent to the clustering estimator to be associated with a specific cluster. NMACs are stored back in the file system. To compare different data size magnitudes, we collected flight data for four air space ranges in nautical miles (nmi): 2 nmi, 20 nmi, 200 nmi, and 1500 nmi around John F. Kennedy Airport.
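The quadratic cost of the pairwise NMAC comparison explains why the larger air space ranges are so much more demanding: doubling the number of flights roughly quadruples the comparisons. A simplified Java sketch of this computation follows; the flight fields, the planar distance, and the threshold are illustrative assumptions, not UC2's actual detector logic.

```java
import java.util.ArrayList;
import java.util.List;

// Simplified sketch of UC2's pairwise NMAC check; real detection also uses
// speed, heading, and time, and works on geodesic rather than planar distance.
class NmacPairs {
    static class Flight {
        final String id; final double lat; final double lon;
        Flight(String id, double lat, double lon) {
            this.id = id; this.lat = lat; this.lon = lon;
        }
    }

    // Number of pairwise comparisons for n flights: C(n,2) = n*(n-1)/2
    static long comparisons(int n) {
        return (long) n * (n - 1) / 2;
    }

    // Naive O(n^2) scan returning flight pairs closer than a planar threshold
    static List<String> candidatePairs(List<Flight> flights, double maxDist) {
        List<String> pairs = new ArrayList<>();
        for (int i = 0; i < flights.size(); i++) {
            for (int j = i + 1; j < flights.size(); j++) {
                Flight a = flights.get(i), b = flights.get(j);
                if (Math.hypot(a.lat - b.lat, a.lon - b.lon) < maxDist) {
                    pairs.add(a.id + "-" + b.id);
                }
            }
        }
        return pairs;
    }

    public static void main(String[] args) {
        System.out.println(comparisons(4));   // 6: every pair among 4 flights
        List<Flight> flights = List.of(
            new Flight("A", 40.64, -73.78),
            new Flight("B", 40.65, -73.78),
            new Flight("C", 41.00, -72.00));
        System.out.println(candidatePairs(flights, 0.05)); // [A-B]
    }
}
```

This quadratic growth is precisely the computation that the introduce concurrency tactic distributes across Spark workers.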
These ranges represent different application scopes to attend various demand levels: local, metropolitan, and regional areas. The largest dataset (1500 nmi) is 1.4 GB of JSON files. This use case did not have real-time requirements due to its heavy workload nature, and therefore a performance QS with a deadline lower than one hour was defined.

5.3. Use case 3 (UC3)

UC3 is a real-time application to detect NMAC within an air space range, and its architecture is described in Fig. 5(c). The ingestor component consumed data through a direct REST service. Flight data was pushed into a message queue to be consumed by the NMAC detector component, which performed the potential collision detection; results were finally stored in a relational DB through a message broker connector. It is worth mentioning that the NMAC estimators of UC2 and UC3 are the same: since their inputs, outputs, and behavior are identical, we can reuse the functional component definition, even though its deployment can differ according to the defined metric constraints. Given the near real-time nature of this application, latency is the critical quality attribute, and we evaluated this metric in two ranges of air space around John F. Kennedy Airport: 2 nmi and 200 nmi, which demand different computation resources.

5.4. Use case 4 (UC4)

In this last use case, we used publicly available data and a PMML model (polynomial regression) of El Nino/Southern Oscillation (ENSO)5 to implement a batch-oriented pipeline, see Fig. 5(d). This El Nino/Southern Oscillation (ENSO) cycle was the strongest of the century, producing many problems throughout the world and affecting South and North American countries with destructive flooding in some areas and strong drought in others.
Data for this use case contains oceanographic and surface meteorological readings (geolocation, humidity, surface winds, sea surface temperatures, and subsurface temperatures) taken from a series of buoys positioned throughout the equatorial Pacific. This data is expected to help with the understanding and prediction of ENSO cycles. We read the historic data from 1980 to 1998 (178,080 records) using a CSV reader (ingestor) component, which sends the data to the ENSO predictor component. The ENSO predictor is an estimator component that forecasts air temperature and stores the prediction in a distributed file system. The QS defined for UC4 was a deadline for batch processing lower than 30 min.

5.5. Development, deployment time, and gain factor

To compare ACCORDANT, SpringXD, and FastScore, we measured the time invested in the development and deployment phases for each use case. The development phase involves the design and development of the functional components and connectors in a specific technology. The deployment phase comprises the design and provisioning of the technology infrastructure, the installation of the software artifacts developed in the previous phase, and the monitoring of the solution regarding the predefined QS. These phases are performed iteratively, since in each iteration some improvements and refinements are made until the QS are achieved. Therefore, we measure the time invested in each iteration, and we also calculate the gain factor $GF(uc, f)$ as a metric to estimate the average time reduction ratio for a use case $uc$, using framework $f$, over $I$ iterations. $GF(uc, f)$ is defined as follows:

$$GF(uc, f) = \frac{1}{I-1} \sum_{i=1}^{I-1} \frac{time_{spent}(uc, f)_i - time_{spent}(uc, f)_{i+1}}{time_{spent}(uc, f)_i} \tag{1}$$

We define the gain factor as a way to measure the incremental improvement of using high-level abstractions to modify or refine an application until an expected QS is achieved.
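As a sketch of how the gain factor is computed, the following Java method averages the relative time reduction between consecutive iterations; the sample times are hypothetical.

```java
// Sketch of the gain factor GF(uc, f): the average relative time reduction
// between consecutive iterations for one use case/framework pair.
class GainFactor {
    static double gainFactor(double[] timeSpent) {
        int iterations = timeSpent.length;   // I iterations
        if (iterations < 2) return 0.0;      // no consecutive pair to compare
        double sum = 0.0;
        for (int i = 0; i < iterations - 1; i++) {
            sum += (timeSpent[i] - timeSpent[i + 1]) / timeSpent[i];
        }
        return sum / (iterations - 1);       // average over the I-1 reductions
    }

    public static void main(String[] args) {
        // Hypothetical development times (hours) over three iterations
        double[] hours = {10.0, 8.0, 5.0};
        System.out.println(gainFactor(hours)); // 0.2875: ~29% saved per iteration
    }
}
```

A gain factor of 0.46, the highest observed in the experiments, would correspond to each iteration taking on average 46% less time than the previous one.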
The time for each use case, phase, and iteration was collected from two development teams, which learned and used the three frameworks to develop and deploy two use cases each, while recording the time spent. The development and deployment process using ACCORDANT is illustrated with UC4 in the next section.

6. Experimentation

To design, develop, and deploy the four use cases, we followed the ACCORDANT process detailed previously in Fig. 1. For the sake of brevity, this section details the step-by-step implementation of UC4 as an example; more details about the other use cases can be found in Castellanos et al. (2018, 2019). The ACCORDANT projects are available in a public repository6 as well as the use cases and results.7

6.1. Definition of quality scenarios

QS are defined according to the use case’s quality requirements. In UC4, a scheduled job to estimate ENSO cycles for ten years of data is processed in batch. In this vein, Fig. 6 details the architectural inputs of UC4 expressed using ACCORDANT’s input package DSL. The predictor component is required to meet a deadline lower than 1 h in the QS UC4_QS1. Analyzing this QS, a sensitivity point (UC4_SP1) is identified to achieve the deadline metric by applying two tactics: introduce concurrency and increase available resources. These tactics will be materialized in the software architecture design.

6.2. Development of data transformations and analytics model

The analytics model is trained and evaluated by the data scientist outside the ACCORDANT framework, and the resulting models are exported to a PMML file to be loaded in the ACCORDANT functional model. In this case, the polynomial regression model of ENSO is downloaded and used. Fig. 7 describes the structure of the PMML file, detailing some data fields, mining fields, and regression coefficients. This PMML file will be embedded in the functional model in the next step.

6.3.
Design of software architecture — Functional view

FV models were designed using the ACCORDANT functional DSL to specify a component-connector structure for each use case. Two iterations of the functional model were designed for UC4, and the last iteration is depicted in Fig. 8a. Since architectural inputs are required in this design, the input package is imported using the keyword use inputPackage. The functional model specifies three components (CSVReader::Ingestor, ENSOPredictor::Estimator, and HDFSWriter::Sink) and two connectors: the procedure calls CallEnso::ProcCall and CallExport::ProcCall, which connect the components through ports. The components also include some properties such as connections and formats. Additionally, ENSOPredictor uses the batch processing model, and it has the PMML file "El-NinoPolReg.pmml", obtained in the previous step, associated to provide the predictive behavior. The sensitivity point UC4_SP1 aligns the architectural inputs (the QS and tactics explained in Section 6.1) to ENSOPredictor. This means that ENSOPredictor becomes part of the introduce concurrency tactic realization, which will be translated into a distributed processing model that has to be supported by the target technology.

6.4. Design of software architecture — Deployment view

The deployment view models were designed using the ACCORDANT DSL for each use case defined in the functional models. The UC4 deployment model had three iterations, and Fig. 8b details the last version. Given that the DV is based on the input package and functional view, they are imported by means of the keywords use inputPackage and functionalView, respectively. This view includes the artifacts that map connectors and components from the functional view (e.g., ENSOPredictor) to deployable elements (e.g., ENSOArtifact). Devices and deployments were specified to support the computation requirements. For instance, the deployment of Spark master and worker nodes (e.g., SparkWorkerDep) details the number of replicas, pods, and execution environments (ExcEnv).
ExcEnv defines the Docker image, CPU and memory requirements, ports, and commands, along with the artifacts to be deployed (ENSOArtifact). Finally, the sensitivity point UC4_SP1 associates the deployment SparkWorkerDep to the performance QS and the tactic increase available resources (see Section 6.1) to support distributed computing over a Spark cluster.

6.5. Integration and code generation

Once the FV and DV models were designed and integrated, the code generation produced both the functional code and IaC. On the one hand, the functional code is a Spark driver program, as detailed in Listing 1, where the ENSOPredictor component implements the PMML model in Spark technology. The Spark program derives its data input and output from the Data Dictionary and Mining Schema embedded in the PMML specification. On the other hand, the infrastructure code consists of the configuration files which specify the provisioning and configuration policies of the Kubernetes cluster. Listing 2 shows an example of the generated Kubernetes files. The whole code of the use cases is publicly available in the accordant-usecases repository.
Listing 1: Generated Java Code of EnsoEstimator Component for Spark Streaming

```java
SparkSession sparkSession = new SparkSession(sc.sc());
// Load the PMML model and build an evaluator for it
InputStream pmmlFile = new URL("file:////path/ElNinoPolReg.pmml").openStream();
EvaluatorBuilder builder = new LoadingModelEvaluatorBuilder().load(pmmlFile);
Evaluator evaluator = builder.build();
// Wrap the PMML evaluator as a Spark ML Transformer
TransformerBuilder pmmlTransformerBuilder = new TransformerBuilder(evaluator)
    .withTargetCols().exploded(true);
Transformer pmmlTransformer = pmmlTransformerBuilder.build();
// Input schema derived from the PMML Data Dictionary
List<StructField> fields = new ArrayList<StructField>();
fields.add(DataTypes.createStructField("latitude", DataTypes.DoubleType, true));
fields.add(DataTypes.createStructField("s_s_temp", DataTypes.DoubleType, true));
StructType schema = DataTypes.createStructType(fields);
Dataset<Row> inputDs = sparkSession.read().schema(schema).csv("data/ElNino.csv");
```

Listing 2: Generated YAML Code from Deployment Specification for Kubernetes (Extract)

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: SparkWorkerDep
spec:
  replicas: 3
  template:
    spec:
      containers:
      - name: SparkWEnv
        image: ramhiser/spark:2.0.1
        command: [/spark worker]
        ports:
        - containerPort: 8081
        resources:
          requests:
            cpu: 0.3
```

6.6. Code execution

The Kubernetes code was executed on the AWS cloud using Amazon Elastic Container Service for Kubernetes (Amazon EKS) and Elastic Compute Cloud (EC2). After that, the software code was installed over the cluster to operationalize the end-to-end solution.

6.7. Solution monitoring

Performance metrics for each use case in operation were collected and validated against the QS defined in Section 6.1. As a result, different deployment configurations were designed, deployed, and monitored in each iteration to observe the fulfillment of the QS.

7.
Results and discussion Revisiting the related work reviewed in Section 3, we have shown in practice how ACCORDANT bridges the gap among analytics, software architecture, and DevOps. As presented in Table 1, ACCORDANT follows the SoC principle by means of three different languages to specify the domain concerns. Analytics models in ACCORDANT are cross-industry and technology-neutral. In terms of software architecture, ACCORDANT supports QS specifications aligned to the FV and DV, and these models can be specified independently but in an integrated way. Architectural tactics enable software architects to describe and communicate their decisions. Code generators offer flexibility and positively impact development and deployment efficiency. Regarding DevOps practices, deployment models allow us to design deployment diagrams that are not limited to a fixed number of instances. Continuous deployment is supported via IaC and code generation, and QS monitoring is implemented by injecting logging code into the generated applications. Finally, self-adaptation is not covered in the current version of ACCORDANT. To summarize, although a large variety of component-connector metamodels have been previously proposed, as far as we know, our contribution resides in specializing a component-connector metamodel for the BDA domain and integrating it with architectural inputs and deployment models to offer a holistic design. Additionally, this section presents and discusses the experimental results obtained during the iterative development and deployment phases of UC1, UC2, UC3, and UC4. 7.1. Development and deployment time Fig. 9 depicts the development and deployment time (in hours) accumulated over all iterations per use case. It is worth noting that development time using ACCORDANT is higher (between 23% and 47%) compared to SpringXD and FastScore, but deployment time is significantly lower (between 50% and 81%) using ACCORDANT. 
The higher development time can be explained by the time required in ACCORDANT to specify the architectural inputs and the many details of the FV. In addition, the current version of the ACCORDANT prototype generates functional code for estimators, but ingestors, sinks, and connectors still require manual coding. Although ACCORDANT required more effort in the development phase, this effort is rewarded during the deployment phase, where infrastructure and QS monitoring are provided automatically, aligned to the Inputs and FV, unlike in other approaches. This benefit can be observed in the deployment times across all use cases using ACCORDANT, which are more uniform than those of the other approaches. The biggest time difference arises in UC2, which demands more time because it includes a more complex pipeline involving two estimators: the NMAC detector and K-means clustering. Another interesting finding was that the high-level reuse of previous architectural decisions (tactics) reduced development time, as shown by the marked decrease across use cases and the growing gain factor across iterations detailed in Fig. 9. These results suggest that ACCORDANT is most suitable for applications involving multiple iterations, or for subsequent applications where reusing architectural decisions, models, and metrics can reduce development times. 7.2. Gain factor comparison The gain factor metric presented in Eq. (1) in Section 5.5 was calculated for each use case and iteration of the development and deployment phases, as depicted in Fig. 10. ACCORDANT’s gain factor was higher for all use cases in the development phase (Fig. 10a), which suggests that its high-level abstractions promote the largest reduction of development time between consecutive iterations. The highest gain factor was 0.46, in UC3, meaning a 46% reduction in development time between consecutive iterations. The greatest gain factor difference over the other approaches was 0.13, in UC3. Regarding the deployment gain factor (Fig. 
10b), ACCORDANT also exhibited the highest gain factor, by an even higher margin, up to 0.75 in UC4. This means that each deployment iteration reduces the time by 75% compared to the previous one. As with the deployment time in the previous section, we argue that the gain factor in the deployment phase is greater because IaC generation is not present in the other approaches. 8. Conclusions We have presented a DevOps and DSM proposal to design, deploy, and monitor BDA solutions. We have positioned the ACCORDANT contributions within the related work. Four use cases from different domains were used to evaluate our approach against two BDA frameworks. As a result, ACCORDANT has been shown to facilitate and accelerate iterative development and deployment phases by offering an integrated, high-level design of BDA applications. The greatest time reduction was reported in the deployment phase, achieving up to 81% compared to the other approaches. In contrast, the development times offered by ACCORDANT were greater. Despite the longer development time, deployment time is significantly reduced thanks to the QS, FV, and DV alignment. ACCORDANT’s gain factor was higher, which implies a greater time reduction in each iteration. On the other hand, some limitations have emerged from the experimentation. The development phase is slower than in the other approaches for multiple reasons. The current version of the ACCORDANT prototype requires supplementary manual coding, which increases the development time. ACCORDANT also requires more design details and architectural inputs. These additional definitions are rewarded in consecutive iterations, so ACCORDANT is most suitable for applications involving multiple iterations. Finally, our approach takes advantage of reusing architectural decisions and models; hence, first-time or one-time applications may not benefit from our proposal. 
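The per-iteration reading of the gain factor used above (a gain of 0.75 meaning each iteration takes 75% less time than the previous one) can be sketched as follows. This is an illustrative interpretation only, assuming gain_i = 1 − t_i / t_(i−1); the actual metric is Eq. (1) in Section 5.5, and the times in main are made-up values, not measurements from the study.

```java
// Illustrative sketch of an iteration-over-iteration gain factor.
// Assumption: gain_i = 1 - t_i / t_(i-1), consistent with reading
// a gain of 0.75 as "75% less time than the previous iteration".
public class GainFactor {
    // Returns the gain between each pair of consecutive iteration times.
    static double[] gains(double[] times) {
        double[] g = new double[times.length - 1];
        for (int i = 1; i < times.length; i++)
            g[i - 1] = 1.0 - times[i] / times[i - 1];
        return g;
    }

    public static void main(String[] args) {
        // Hypothetical per-iteration deployment times (hours), not real data.
        double[] deployHours = {8.0, 4.0, 1.0};
        for (double g : gains(deployHours))
            System.out.printf("%.2f%n", g);  // prints 0.50 then 0.75
    }
}
```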
As future work, the performance metrics collected along with the FV and DV models could allow us to propose a performance model that predicts the expected application-specific behavior based on the functional model, deployment model, and target technology, in order to recommend an optimal architecture configuration for a defined QS. Furthermore, we could include features to simulate and verify correctness properties over the models, such as technology selection in the FV model and resource allocation in the DV model. Given that PMML provides a model verification schema to validate result accuracy, a future extension could incorporate automated model verification. Our approach has been used for deploying analytics components and connectors on virtual machines over cloud infrastructure, but different paradigms such as serverless or fog computing may open new research lines. Acknowledgments This research is supported by Fulbright Colombia and the Center of Excellence and Appropriation in Big Data and Data Analytics (CAOBA), supported by the Ministry of Information Technologies and Telecommunications of the Republic of Colombia (MinTIC) through the Colombian Administrative Department. Appendix A. Abbreviations See Table A.3. References CloverDX. CloverDX Data Integration. URL: https://www.cloverdx.com/. Open Data Group. FastScore. URL: https://www.opendatagroup.com/fastscore. Cristian Castellanos is a Ph.D. candidate at the Department of Systems and Computing Engineering, Universidad de Los Andes, Colombia. His research tackles the deployment challenges of big data analytics solutions using model-driven architecture, motivated by real-life experiences. He holds an M.Sc. in Systems and Computing Engineering (cum laude) from Universidad de Los Andes, focused on enterprise architecture alignment between business and information domains using model-driven engineering and ontology matching. 
Since 2016, he has been working at Alianza Caoba, a public–private initiative that gathers the Colombian government, industry, and student community around applied research on big data analytics. In 2019, he was awarded a Fulbright scholarship for a research stay at RPI, NY, to study the deployment of big data analytics applications in avionics. Dr. Carlos A. Varela received his B.S. with honors, M.S., and Ph.D. in Computer Science at the University of Illinois at Urbana-Champaign. Dr. Varela is Associate Editor and Information Director of the ACM Computing Surveys journal, and has served as Guest Editor of the Scientific Programming journal. Dr. Varela is a recipient of several research grants including the NSF CAREER award, two IBM SUR awards, and two IBM Innovation awards. His current research interests include web-based and internet-based computing, middleware for adaptive distributed systems, concurrent programming models and languages, and software development environments and tools. For more information on Prof. Varela’s group’s research, please visit the Worldwide Computing Lab at http://wcl.cs.rpi.edu/. Dr. Dario Correal is an Associate Professor of the Department of Systems and Computing at the University of Georgia. Dr. Correal is a Systems Engineer from the Universidad de Los Andes, with a Master’s Degree in Systems Engineering and a Doctorate in Engineering from the Universidad de Los Andes. His specific research interests are Software Architecture, Solution Architecture, and Self-Adaptable Architectures. His research is carried out within the TICSw group. Dr. Correal is a recipient of awards including third place in the ACM SIGPLAN Student Research Competition at OOPSLA’06 and third place in the Student’s Competition at I2LOR’06. In industry, he has over ten years of experience in software development and management of development teams.
Language & Attention Steve Zhu • Language Models are Few-Shot Learners • Attention Is All You Need What is a Language Model Let’s go bucks and beat Michigan. Input Let’s go bucks and beat Michigan. Or: Which word is the most likely to come next? Language Model: A model that can generate language in a probabilistic way. You can train it on any sort of text data. Common Crawl Dataset - Filtered for better quality - A crawl of the entire Internet GPT-3: An autoregressive language model Language Models are Few-Shot Learners - Crazy size - Transformer Model <table> <thead> <tr> <th>Model Name</th> <th>( n_{\text{params}} )</th> <th>( n_{\text{layers}} )</th> <th>( d_{\text{model}} )</th> <th>( n_{\text{heads}} )</th> <th>( d_{\text{head}} )</th> <th>( \text{Batch Size} )</th> <th>( \text{Learning Rate} )</th> </tr> </thead> <tbody> <tr> <td>GPT-3 Small</td> <td>125M</td> <td>12</td> <td>768</td> <td>12</td> <td>64</td> <td>0.5M</td> <td>( 6.0 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 Medium</td> <td>350M</td> <td>24</td> <td>1024</td> <td>16</td> <td>64</td> <td>0.5M</td> <td>( 3.0 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 Large</td> <td>760M</td> <td>24</td> <td>1536</td> <td>16</td> <td>96</td> <td>0.5M</td> <td>( 2.5 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 XL</td> <td>1.3B</td> <td>24</td> <td>2048</td> <td>24</td> <td>128</td> <td>1M</td> <td>( 2.0 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 2.7B</td> <td>2.7B</td> <td>32</td> <td>2560</td> <td>32</td> <td>80</td> <td>1M</td> <td>( 1.6 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 6.7B</td> <td>6.7B</td> <td>32</td> <td>4096</td> <td>32</td> <td>128</td> <td>2M</td> <td>( 1.2 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 13B</td> <td>13.0B</td> <td>40</td> <td>5140</td> <td>40</td> <td>128</td> <td>2M</td> <td>( 1.0 \times 10^{-4} )</td> </tr> <tr> <td>GPT-3 175B or “GPT-3”</td> <td>175.0B</td> <td>96</td> <td>12288</td> <td>96</td> <td>128</td> <td>3.2M</td> <td>( 0.6 \times 10^{-4} )</td> </tr> </tbody> </table> Transformer Model: • Input Context (we already have) Let’s go bucks and beat Michigan. Attention Mechanism: • A mechanism that routes information between the different tokens. As information moves up through the layers, it is routed around, the model makes various inferences, and at the end the model predicts the next word. Let’s go bucks and beat Michigan. How did they train it Traditional Fine-tuning (Not used for GPT-3) **Fine-tuning** The model is trained via repeated gradient updates using a large corpus of example tasks. 1. sea otter => loutre de mer - gradient update - example #1 2. peppermint => menthe poivrée - gradient update - example #2 3. plush giraffe => girafe peluche - gradient update - example #N 4. cheese => ........................................... - prompt **BERT:** 1. Pretrain 2. Fine-tuning - Train Set - Test Set True Zero-shot **Zero-shot** The model predicts the answer given only a natural language description of the task. No gradient updates are performed. 1. Translate English to French: ⇐ task description 2. cheese => ......................... ⇐ prompt One-shot (the example comes from the training set but is not trained on) **One-shot** In addition to the task description, the model sees a single example of the task. No gradient updates are performed. 1. **task description**: Translate English to French: 2. **example**: sea otter => loutre de mer 3. **prompt**: cheese => ...................................... Few-shot In addition to the task description, the model sees a few examples of the task. No gradient updates are performed. 1. Translate English to French: 2. sea otter => loutre de mer 3. peppermint => menthe poivrée 4. plush giraffe => girafe peluche 5. 
cheese => (task description) (examples) (prompt) Example: Give the Chinese characters for the numbers One -> 一 Two -> 二 Three -> 三 Four -> 四 Results As the number of parameters goes up, the validation loss goes down (parameters on a log scale). You can make improvements on language modeling simply by scaling up your model. Individual Tasks: Alice was friends with Bob. Alice went to visit her friend _______. -> Bob George bought some baseball equipment, a ball, a glove, and a __________. -> baseball bat Question Answering: Open-Domain means that the model can go and look at some Wikipedia pages. <table> <thead> <tr> <th>Setting</th> <th>Natural QS</th> <th>Web QS</th> <th>TriviaQA</th> </tr> </thead> <tbody> <tr> <td>RAG (Fine-tuned, Open-Domain) [LPP+20]</td> <td>44.5</td> <td>45.5</td> <td>68.0</td> </tr> <tr> <td>T5-11B+SSM (Fine-tuned, Closed-Book) [RRS20]</td> <td>36.6</td> <td>44.7</td> <td>60.5</td> </tr> <tr> <td>T5-11B (Fine-tuned, Closed-Book)</td> <td>34.5</td> <td>37.4</td> <td>50.1</td> </tr> <tr> <td>GPT-3 Zero-Shot</td> <td>14.6</td> <td>14.4</td> <td>64.3</td> </tr> <tr> <td>GPT-3 One-Shot</td> <td>23.0</td> <td>25.3</td> <td>68.0</td> </tr> <tr> <td>GPT-3 Few-Shot</td> <td>29.9</td> <td>41.5</td> <td>71.2</td> </tr> </tbody> </table> Translation: <table> <thead> <tr> <th>Setting</th> <th>En→Fr</th> <th>Fr→En</th> <th>En→De</th> <th>De→En</th> <th>En→Ro</th> <th>Ro→En</th> </tr> </thead> <tbody> <tr> <td>SOTA (Supervised)</td> <td>45.6<sup>a</sup></td> <td>35.0<sup>b</sup></td> <td>41.2<sup>c</sup></td> <td>40.2<sup>d</sup></td> <td>38.5<sup>e</sup></td> <td>39.9<sup>e</sup></td> </tr> <tr> <td>XLM [LC19]</td> <td>33.4</td> <td>33.3</td> <td>26.4</td> <td>34.3</td> <td>33.3</td> <td>31.8</td> </tr> <tr> <td>MASS [STQ+19]</td> <td>37.5</td> <td>34.9</td> <td>28.3</td> <td>35.2</td> <td>35.2</td> <td>33.1</td> </tr> <tr> <td>mBART [LGG+20]</td> <td>-</td> <td>-</td> 
<td>29.8</td> <td>34.0</td> <td>35.0</td> <td>30.5</td> </tr> <tr> <td>GPT-3 Zero-Shot</td> <td>25.2</td> <td>21.2</td> <td>24.6</td> <td>27.2</td> <td>14.1</td> <td>19.9</td> </tr> <tr> <td>GPT-3 One-Shot</td> <td>28.3</td> <td>33.7</td> <td>26.2</td> <td>30.4</td> <td>20.6</td> <td>38.6</td> </tr> <tr> <td>GPT-3 Few-Shot</td> <td>32.6</td> <td>39.2</td> <td>29.7</td> <td>40.6</td> <td>21.0</td> <td>39.5</td> </tr> </tbody> </table> Winograd: The Winograd Schema Challenge is a classical NLP task that involves determining which word a pronoun refers to when the pronoun is grammatically ambiguous but semantically unambiguous to a human. Physical Q&A: - ARC (a dataset of multiple-choice questions collected from 3rd to 9th grade science exams) - Asks common sense questions about how the physical world works and is intended as a probe of grounded understanding of the world. ### PhysicalQA <table> <thead> <tr> <th>Setting</th> <th>PIQA</th> <th>ARC (Easy)</th> <th>ARC (Challenge)</th> <th>OpenBookQA</th> </tr> </thead> <tbody> <tr> <td>Fine-tuned SOTA</td> <td>79.4</td> <td>92.0[KKS+20]</td> <td>78.5[KKS+20]</td> <td>87.2[KKS+20]</td> </tr> <tr> <td>GPT-3 Zero-Shot</td> <td><strong>80.5*</strong></td> <td>68.8</td> <td>51.4</td> <td>57.6</td> </tr> <tr> <td>GPT-3 One-Shot</td> <td><strong>80.5*</strong></td> <td>71.2</td> <td>53.2</td> <td>58.8</td> </tr> <tr> <td>GPT-3 Few-Shot</td> <td><strong>82.8*</strong></td> <td>70.1</td> <td>51.5</td> <td>65.4</td> </tr> </tbody> </table> Reading Comprehension: - abstractive, multiple choice, and span-based answer formats in both dialog and single question settings. 
<table> <thead> <tr> <th></th> <th>SuperGLUE Average</th> <th>BoolQ Accuracy</th> <th>CB Accuracy</th> <th>CB F1</th> <th>COPA Accuracy</th> <th>RTE Accuracy</th> </tr> </thead> <tbody> <tr> <td>Fine-tuned SOTA</td> <td>89.0</td> <td>91.0</td> <td>96.9</td> <td>93.9</td> <td>94.8</td> <td>92.5</td> </tr> <tr> <td>Fine-tuned BERT-Large</td> <td>69.0</td> <td>77.4</td> <td>83.6</td> <td>75.7</td> <td>70.6</td> <td>71.7</td> </tr> <tr> <td>GPT-3 Few-Shot</td> <td>71.8</td> <td>76.4</td> <td>75.6</td> <td>52.0</td> <td>92.0</td> <td>69.0</td> </tr> </tbody> </table> <table> <thead> <tr> <th></th> <th>WiC Accuracy</th> <th>WSC Accuracy</th> <th>MultiRC Accuracy</th> <th>MultiRC F1a</th> <th>ReCoRD Accuracy</th> <th>ReCoRD F1</th> </tr> </thead> <tbody> <tr> <td>Fine-tuned SOTA</td> <td>76.1</td> <td>93.8</td> <td>62.3</td> <td>88.2</td> <td>92.5</td> <td>93.3</td> </tr> <tr> <td>Fine-tuned BERT-Large</td> <td>69.6</td> <td>64.6</td> <td>24.1</td> <td>70.0</td> <td>71.3</td> <td>72.0</td> </tr> <tr> <td>GPT-3 Few-Shot</td> <td>49.4</td> <td>80.1</td> <td>30.5</td> <td>75.4</td> <td>90.2</td> <td>91.1</td> </tr> </tbody> </table> BoolQ Dataset: { "question": "is france the same timezone as the uk", "passage": "At the Liberation of France in the summer of 1944, Metropolitan France kept GMT+2 as it was the time then used by the Allies (British Double Summer Time). In the winter of 1944--1945, Metropolitan France switched to GMT+1, same as in the United Kingdom, and switched again to GMT+2 in April 1945 like its British ally. In September 1945, Metropolitan France returned to GMT+1 (pre-war summer time), which the British had already done in July 1945. Metropolitan France was officially scheduled to return to GMT+0 on November 18, 1945 (the British returned to GMT+0 on October 7, 1945), but the French government canceled the decision on November 5, 1945, and GMT+1 has since then remained the official time of Metropolitan France.", 
"answer": false, "title": "Time in France" } COPA Dataset: 1. Examples Premise: The man broke his toe. What was the CAUSE of this? Alternative 1: He got a hole in his sock. Alternative 2: He dropped a hammer on his foot. Premise: I tipped the bottle. What happened as a RESULT? Alternative 1: The liquid in the bottle froze. Alternative 2: The liquid in the bottle poured out. Premise: I knocked on my neighbor’s door. What happened as a RESULT? Alternative 1: My neighbor invited me in. Alternative 2: My neighbor left his house. NLI (Natural Language Inference): • Also performed poorly • Concerns the ability to understand the relationship between two sentences. Synthetic and Qualitative Tasks: - Arithmetic - Word Scrambling and Manipulation Tasks - SAT Analogies - News Article Generation - Learning and Using Novel Words - Correcting English Grammar Arithmetic: - 2 digit addition (2D+) – The model is asked to add two integers sampled uniformly from [0, 100), phrased in the form of a question, e.g. “Q: What is 48 plus 76? A: 124.” - 2 digit subtraction (2D-) – The model is asked to subtract two integers sampled uniformly from [0, 100); the answer may be negative. Example: “Q: What is 34 minus 53? A: -19”. - 3 digit addition (3D+) – Same as 2 digit addition, except numbers are uniformly sampled from [0, 1000). - 3 digit subtraction (3D-) – Same as 2 digit subtraction, except numbers are uniformly sampled from [0, 1000). - 4 digit addition (4D+) – Same as 3 digit addition, except uniformly sampled from [0, 10000). - 4 digit subtraction (4D-) – Same as 3 digit subtraction, except uniformly sampled from [0, 10000). - 5 digit addition (5D+) – Same as 3 digit addition, except uniformly sampled from [0, 100000). - 5 digit subtraction (5D-) – Same as 3 digit subtraction, except uniformly sampled from [0, 100000). - 2 digit multiplication (2Dx) – The model is asked to multiply two integers sampled uniformly from [0, 100), e.g. “Q: What is 24 times 42? A: 1008”. 
- One-digit composite (1DC) – The model is asked to perform a composite operation on three 1 digit numbers, with parentheses around the last two. For example, “Q: What is 6+(4*8)? A: 38”. The three 1 digit numbers are selected uniformly from [0, 10) and the operations are selected uniformly from {+, -, *}. (Figure: arithmetic task accuracy vs. model parameters.) <table> <thead> <tr> <th>Setting</th> <th>2D+</th> <th>2D-</th> <th>3D+</th> <th>3D-</th> <th>4D+</th> <th>4D-</th> <th>5D+</th> <th>5D-</th> <th>2Dx</th> <th>1DC</th> </tr> </thead> <tbody> <tr> <td>GPT-3 Zero-shot</td> <td>76.9</td> <td>58.0</td> <td>34.2</td> <td>48.3</td> <td>4.0</td> <td>7.5</td> <td>0.7</td> <td>0.8</td> <td>19.8</td> <td>9.8</td> </tr> <tr> <td>GPT-3 One-shot</td> <td>99.6</td> <td>86.4</td> <td>65.5</td> <td>78.7</td> <td>14.0</td> <td>14.0</td> <td>3.5</td> <td>3.8</td> <td>27.4</td> <td>14.3</td> </tr> <tr> <td>GPT-3 Few-shot</td> <td>100.0</td> <td>98.9</td> <td>80.4</td> <td>94.2</td> <td>25.5</td> <td>26.8</td> <td>9.3</td> <td>9.9</td> <td>29.2</td> <td>21.3</td> </tr> </tbody> </table> Word Scrambling and Manipulation Tasks • Cycle letters in word (CL) – The model is given a word with its letters cycled, then the “=” symbol, and is expected to generate the original word. For example, it might be given “lyinevitab” and should output “inevitably”. • Anagrams of all but first and last characters (A1) – The model is given a word where every letter except the first and last have been scrambled randomly, and must output the original word. Example: criroptuon = corruption. • Anagrams of all but first and last 2 characters (A2) – The model is given a word where every letter except the first 2 and last 2 have been scrambled randomly, and must recover the original word. Example: opoepnnt = opponent. 
• Random insertion in word (RI) – A random punctuation or space character is inserted between each letter of a word, and the model must output the original word. Example: s.u!c/c!e.s s i/o/n = succession. • Reversed words (RW) – The model is given a word spelled backwards, and must output the original word. Example: stcejbo = objects. $K = 100$. (Figure: word-scrambling accuracy vs. parameters in the LM (billions).) SAT Analogies: • Example: “audacious is to boldness as (a) sanctimonious is to hypocrisy, (b) anonymous is to identity, (c) remorseful is to misdeed, (d) deleterious is to result, (e) impressionable is to temptation”. • On this task GPT-3 achieves 65.2% in the few-shot setting, 59.1% in the one-shot setting, and 53.7% in the zero-shot setting, whereas the average score among college applicants was 57%. News Article Generation: • They gave the model the beginning of a news article and let humans decide whether the article was written by a machine or by a human. • By contrast, mean human accuracy at detecting articles that were produced by the 175B parameter model was barely above chance, at 52%. Human ability to detect model-generated text appears to decrease as model size increases: there appears to be a trend towards chance accuracy with model size, and human detection of GPT-3 output is close to chance. Title: United Methodists Agree to Historic Split Subtitle: Those who oppose gay marriage will form their own denomination Article: After two days of intense debate, the United Methodist Church has agreed to a historic split - one that is expected to end in the creation of a new denomination, one that will be "theologically and socially conservative," according to The Washington Post. The majority of delegates attending the church’s annual General Conference in May voted to strengthen a ban on the ordination of LGBTQ clergy and to write new rules that will "discipline" clergy who officiate at same-sex weddings. 
But those who opposed these measures have a new plan: They say they will form a separate denomination by 2020, calling their church the Christian Methodist denomination. The Post notes that the denomination, which claims 12.5 million members, was in the early 20th century the "largest Protestant denomination in the U.S.,” but that it has been shrinking in recent decades. The new split will be the second in the church’s history. The first occurred in 1968, when roughly 10 percent of the denomination left to form the Evangelical United Brethren Church. The Post notes that the proposed split "comes at a critical time for the church, which has been losing members for years," which has been "pushed toward the brink of a schism over the role of LGBTQ people in the church." Gay marriage is not the only issue that has divided the church. In 2016, the denomination was split over ordination of transgender clergy, with the North Pacific regional conference voting to ban them from serving as clergy, and the South Pacific regional conference voting to allow them. Learning and Using Novel Words: A "Burringo" is a car with very fast acceleration. An example of a sentence that uses the word Burringo is: In our garage we have a Burringo that my father drives to work every day. A "Gigamuru" is a type of Japanese musical instrument. An example of a sentence that uses the word Gigamuru is: I have a Gigamuru that my uncle gave me as a gift. I love to play it at home. To "screeg" something is to swing a sword at it. An example of a sentence that uses the word screeg is: We screeged at each other for several minutes and then we went outside and ate ice cream. ## Correcting English Grammar: <table> <thead> <tr> <th>Poor English input</th> <th>Good English output</th> </tr> </thead> <tbody> <tr> <td>I eated the purple berries.</td> <td>I ate the purple berries.</td> </tr> <tr> <td>Thank you for picking me as your designer. 
I’d appreciate it.</td>
<td>Thank you for choosing me as your designer. I appreciate it.</td>
</tr>
<tr>
<td>The mentioned changes have done. or I did the alteration that you requested. or I changed things you wanted and did the modifications.</td>
<td>The requested changes have been made. or I made the alteration that you requested. or I changed things you wanted and made the modifications.</td>
</tr>
<tr>
<td>I’d be more than happy to work with you in another project.</td>
<td>I’d be more than happy to work with you on another project.</td>
</tr>
<tr>
<td>Please provide me with a short brief of the design you’re looking for and that’d be nice if you could share some examples or projects you did before.</td>
<td>Please provide me with a brief description of the design you’re looking for and that would be nice if you could share some examples or projects you have done before.</td>
</tr>
<tr>
<td>The patient was died.</td>
<td>The patient died.</td>
</tr>
<tr>
<td>We think that Leslie likes ourselves.</td>
<td>We think that Leslie likes us.</td>
</tr>
<tr>
<td>Janet broke Bill on the finger.</td>
<td>Janet broke Bill’s finger.</td>
</tr>
<tr>
<td>Mary arranged for, in St. Louis, John to rent a house cheap.</td>
<td>Mary arranged for John to rent a house in St. Louis.</td>
</tr>
<tr>
<td>Today I have went to the store to to buys some many bottle of water.</td>
<td>Today I went to the store to buy some bottles of water.</td>
</tr>
<tr>
<td>I have tried to hit ball with bat, but my swing is has miss.</td>
<td>I tried to hit the ball with the bat, but my swing missed.</td>
</tr>
</tbody>
</table>

Limitations

• Language Models are Few-Shot Learners
• Attention Is All You Need

Traditional RNN

The cat eats the mouse.
[Figure: the RNN unrolled over "The cat eats the mouse": each word's vector passes through the network in turn, and hidden states are carried from one step to the next.]

Traditional RNN

The cat eats the mouse. → Die Katze frisst die Maus.

Transformer

Attention Is All You Need

[Figure: INPUT "Je suis étudiant" → ENCODERS → DECODERS → OUTPUT "I am a student".]

$$\text{Attention}(Q, K, V) = \text{softmax}\left(\frac{QK^T}{\sqrt{d_k}}\right)V \quad (1)$$

$$\text{FFN}(Z) = \max(0, ZW_1 + b_1)W_2 + b_2 \quad (2)$$

1. Self-Attention: relationship between the current position and the previous positions of the same sequence
2. Encoder-Decoder Attention: relationship between the current translation and the encoder's feature vectors

Bringing the Tensors into the Picture

Each word is embedded into a vector of size 512. We'll represent those vectors with these simple boxes.

Encoding

Self-Attention

"The animal didn’t cross the street because it was too tired"

[Figure: worked self-attention example on the two-word input "Thinking Machines". Each embedding $x_i$ is projected into a query $q_i$, a key $k_i$, and a value $v_i$. The scores for $q_1$ are $q_1 \cdot k_1 = 112$ and $q_1 \cdot k_2 = 96$; dividing by $8$ ($\sqrt{d_k}$) gives 14 and 12, and the softmax turns these into weights 0.88 and 0.12.]
Each value vector is then weighted by its softmax score and summed, producing the outputs $z_1$ and $z_2$.

To Sum up:
1. Embed each word
2. Compute Q, K, V from the embedded vectors
3. Calculate a score
4. Divide the scores by $\sqrt{d_k}$
5. Pass the result through a softmax operation
6. Multiply each value vector by its softmax score
7. Sum up the weighted value vectors

The self-attention calculation in matrix form

Multi-Head Attention

[Figure: attention is calculated separately in eight attention heads #0…#7 for the input "Thinking Machines", producing $Z_0, \dots, Z_7$.]

1) Concatenate all the attention heads: $[Z_0\; Z_1\; \dots\; Z_7]$
2) Multiply with a weight matrix $W^O$ that was trained jointly with the model
3) The result is the $Z$ matrix that captures information from all the attention heads. We can send this forward to the FFNN.

Putting it all together:
1) This is our input sentence
2) We embed each word
3) Split into 8 heads. We multiply $X$ (or $R$) with weight matrices
4) Calculate attention using the resulting $Q/K/V$ matrices
5) Concatenate the resulting $Z$ matrices, then multiply with the weight matrix $W^O$ to produce the output of the layer

* In all encoders other than #0, we don't need embedding. We start directly with the output of the encoder right below.

[Figure: visualization of layer-5 input-to-input attention for the sentence "The animal didn't cross the street because it was too tired".]

Encoder-Decoder Attention

- In the decoder, each Transformer block has one more attention sublayer than in the encoder: the encoder-decoder attention. There, Q comes from the previous decoder output, while K and V come from the encoder output.
- Because decoding in machine translation is a sequential operation, when producing the k-th feature vector the decoder can only use positions (k-1) and earlier.
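The seven steps summarized above can be sketched in a few lines of NumPy. This is a minimal single-head illustration with hypothetical weight matrices; it omits masking, multiple heads, and batching:

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention following the seven steps above."""
    # Steps 1-2: project the embedded inputs X into queries, keys, values.
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    d_k = K.shape[-1]
    # Step 3: score every query against every key.
    scores = Q @ K.T
    # Step 4: scale by sqrt(d_k) to keep the softmax well-behaved.
    scores = scores / np.sqrt(d_k)
    # Step 5: row-wise softmax (shifted by the row max for stability).
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)
    # Steps 6-7: weight the value vectors and sum them up.
    return weights @ V
```

For a two-word input like "Thinking Machines", `X` would have two rows, and the returned matrix would hold $z_1$ and $z_2$ as its rows.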
In the paper, the authors named the multi-head attention used under this condition “masked multi-head attention”.

Positional Encoding

[Figure: before entering ENCODER #0, each word embedding $x_i$ of "Je suis étudiant" is summed with a positional encoding $t_i$, giving an "embedding with time signal".]

\[ PE_{(pos, 2i)} = \sin\left(\frac{pos}{10000^{2i/d_{model}}}\right) \quad (3) \]

\[ PE_{(pos, 2i+1)} = \cos\left(\frac{pos}{10000^{2i/d_{model}}}\right) \quad (4) \]

Here pos represents the position of the word and i represents the dimension. Reference code can be found in Google's open-source implementation, in get_timing_signal_1d().

Result

<table>
<thead>
<tr>
<th rowspan="2">Model</th>
<th colspan="2">BLEU</th>
</tr>
<tr>
<th>EN-DE</th>
<th>EN-FR</th>
</tr>
</thead>
<tbody>
<tr>
<td>ByteNet [18]</td>
<td>23.75</td>
<td></td>
</tr>
<tr>
<td>Deep-Att + PosUnk [39]</td>
<td></td>
<td>39.2</td>
</tr>
<tr>
<td>GNMT + RL [38]</td>
<td>24.6</td>
<td>39.92</td>
</tr>
<tr>
<td>ConvS2S [9]</td>
<td>25.16</td>
<td>40.46</td>
</tr>
<tr>
<td>MoE [32]</td>
<td>26.03</td>
<td>40.56</td>
</tr>
<tr>
<td>Deep-Att + PosUnk Ensemble [39]</td>
<td></td>
<td></td>
</tr>
<tr>
<td>GNMT + RL Ensemble [38]</td>
<td>26.30</td>
<td>41.16</td>
</tr>
<tr>
<td>ConvS2S Ensemble [9]</td>
<td>26.36</td>
<td>41.29</td>
</tr>
<tr>
<td>Transformer (base model)</td>
<td>27.3</td>
<td>38.1</td>
</tr>
<tr>
<td>Transformer (big)</td>
<td>28.4</td>
<td>41.8</td>
</tr>
</tbody>
</table>

The paper also reports model variations around the base configuration ($N = 6$, $d_{model} = 512$): group (A) varies the number of attention heads $h$ (e.g., 1, 4, 32) together with $d_k$ and $d_v$; (B) varies $d_k$ alone; (C) varies the model size ($N$ = 2, 4, or 8, $d_{model}$ = 256 or 1024, $d_{ff}$ = 1024 or 4096); (D) varies the dropout rates (0.0 vs 0.2); and (E) uses a learned positional embedding instead of sinusoids. The big model uses $N = 6$ and $d_{model} = 1024$.

To Sum up

Advantages:
1. Uses attention only and achieves very good results.
2. The approach is not limited to NLP.
3. Parallelizes well on GPUs.

Limitations:
1. It loses some of the ability to capture local features; a combination of RNN/CNN with the Transformer could be better.
2. Positional embedding does not fully remedy the loss of sequential structure.

Thanks!

Reference: http://jalammar.github.io/illustrated-transformer/
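As a closing illustration, the sinusoidal positional encoding of Equations (3) and (4) can be sketched in NumPy. This is a minimal version written for this summary; the function name is ours, not that of Google's `get_timing_signal_1d()`:

```python
import numpy as np

def positional_encoding(max_pos, d_model):
    """Sinusoidal positional encoding per Equations (3) and (4)."""
    pos = np.arange(max_pos)[:, None]      # word position
    i = np.arange(d_model // 2)[None, :]   # dimension pair index
    angle = pos / np.power(10000.0, 2 * i / d_model)
    pe = np.zeros((max_pos, d_model))
    pe[:, 0::2] = np.sin(angle)  # even dimensions: Eq. (3)
    pe[:, 1::2] = np.cos(angle)  # odd dimensions:  Eq. (4)
    return pe
```

Each row of the result is the "time signal" added to the corresponding word embedding.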
{"Source-Url": "https://web.cse.ohio-state.edu/~panda.2/5194/slides/4.i-4.j.Language-Attention.pdf", "len_cl100k_base": 8566, "olmocr-version": "0.1.50", "pdf-total-pages": 89, "total-fallback-pages": 0, "total-input-tokens": 106559, "total-output-tokens": 11805, "length": "2e13", "weborganizer": {"__label__adult": 0.0005831718444824219, "__label__art_design": 0.0013723373413085938, "__label__crime_law": 0.0005335807800292969, "__label__education_jobs": 0.005641937255859375, "__label__entertainment": 0.0004115104675292969, "__label__fashion_beauty": 0.00029778480529785156, "__label__finance_business": 0.0003769397735595703, "__label__food_dining": 0.0004603862762451172, "__label__games": 0.0012578964233398438, "__label__hardware": 0.0019016265869140625, "__label__health": 0.0005211830139160156, "__label__history": 0.0004687309265136719, "__label__home_hobbies": 0.00021517276763916016, "__label__industrial": 0.0005459785461425781, "__label__literature": 0.0020809173583984375, "__label__politics": 0.0009927749633789062, "__label__religion": 0.0013914108276367188, "__label__science_tech": 0.0885009765625, "__label__social_life": 0.0004808902740478515, "__label__software": 0.047760009765625, "__label__software_dev": 0.84326171875, "__label__sports_fitness": 0.0003075599670410156, "__label__transportation": 0.0004432201385498047, "__label__travel": 0.00019371509552001953}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 28343, 0.06815]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 28343, 0.19412]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 28343, 0.84336]], "google_gemma-3-12b-it_contains_pii": [[0, 31, false], [31, 100, null], [100, 125, null], [125, 159, null], [159, 250, null], [250, 369, null], [369, 495, null], [495, 533, null], [533, 2776, null], [2776, 2863, null], [2863, 2973, null], [2973, 3260, null], [3260, 3282, null], 
[3282, 3778, null], [3778, 4028, null], [4028, 4387, null], [4387, 4695, null], [4695, 4784, null], [4784, 4792, null], [4792, 4995, null], [4995, 5167, null], [5167, 5187, null], [5187, 5939, null], [5939, 5952, null], [5952, 6703, null], [6703, 6914, null], [6914, 6952, null], [6952, 7176, null], [7176, 7631, null], [7631, 7654, null], [7654, 8902, null], [8902, 9774, null], [9774, 10264, null], [10264, 10399, null], [10399, 10591, null], [10591, 12020, null], [12020, 12575, null], [12575, 13628, null], [13628, 13750, null], [13750, 14157, null], [14157, 14662, null], [14662, 16349, null], [16349, 16950, null], [16950, 18555, null], [18555, 18567, null], [18567, 18636, null], [18636, 18676, null], [18676, 18920, null], [18920, 18988, null], [18988, 19000, null], [19000, 19026, null], [19026, 19121, null], [19121, 19121, null], [19121, 19258, null], [19258, 19284, null], [19284, 19448, null], [19448, 19486, null], [19486, 19590, null], [19590, 19616, null], [19616, 19625, null], [19625, 19651, null], [19651, 19666, null], [19666, 19726, null], [19726, 19726, null], [19726, 20349, null], [20349, 20569, null], [20569, 20870, null], [20870, 21141, null], [21141, 21167, null], [21167, 21213, null], [21213, 21234, null], [21234, 21234, null], [21234, 21420, null], [21420, 21790, null], [21790, 22218, null], [22218, 22244, null], [22244, 22358, null], [22358, 22858, null], [22858, 22878, null], [22878, 23101, null], [23101, 23424, null], [23424, 23431, null], [23431, 24610, null], [24610, 27998, null], [27998, 28008, null], [28008, 28107, null], [28107, 28274, null], [28274, 28282, null], [28282, 28343, null]], "google_gemma-3-12b-it_is_public_document": [[0, 31, true], [31, 100, null], [100, 125, null], [125, 159, null], [159, 250, null], [250, 369, null], [369, 495, null], [495, 533, null], [533, 2776, null], [2776, 2863, null], [2863, 2973, null], [2973, 3260, null], [3260, 3282, null], [3282, 3778, null], [3778, 4028, null], [4028, 4387, null], [4387, 4695, null], 
[4695, 4784, null], [4784, 4792, null], [4792, 4995, null], [4995, 5167, null], [5167, 5187, null], [5187, 5939, null], [5939, 5952, null], [5952, 6703, null], [6703, 6914, null], [6914, 6952, null], [6952, 7176, null], [7176, 7631, null], [7631, 7654, null], [7654, 8902, null], [8902, 9774, null], [9774, 10264, null], [10264, 10399, null], [10399, 10591, null], [10591, 12020, null], [12020, 12575, null], [12575, 13628, null], [13628, 13750, null], [13750, 14157, null], [14157, 14662, null], [14662, 16349, null], [16349, 16950, null], [16950, 18555, null], [18555, 18567, null], [18567, 18636, null], [18636, 18676, null], [18676, 18920, null], [18920, 18988, null], [18988, 19000, null], [19000, 19026, null], [19026, 19121, null], [19121, 19121, null], [19121, 19258, null], [19258, 19284, null], [19284, 19448, null], [19448, 19486, null], [19486, 19590, null], [19590, 19616, null], [19616, 19625, null], [19625, 19651, null], [19651, 19666, null], [19666, 19726, null], [19726, 19726, null], [19726, 20349, null], [20349, 20569, null], [20569, 20870, null], [20870, 21141, null], [21141, 21167, null], [21167, 21213, null], [21213, 21234, null], [21234, 21234, null], [21234, 21420, null], [21420, 21790, null], [21790, 22218, null], [22218, 22244, null], [22244, 22358, null], [22358, 22858, null], [22858, 22878, null], [22878, 23101, null], [23101, 23424, null], [23424, 23431, null], [23431, 24610, null], [24610, 27998, null], [27998, 28008, null], [28008, 28107, null], [28107, 28274, null], [28274, 28282, null], [28282, 28343, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 
28343, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 28343, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 28343, null]], "pdf_page_numbers": [[0, 31, 1], [31, 100, 2], [100, 125, 3], [125, 159, 4], [159, 250, 5], [250, 369, 6], [369, 495, 7], [495, 533, 8], [533, 2776, 9], [2776, 2863, 10], [2863, 2973, 11], [2973, 3260, 12], [3260, 3282, 13], [3282, 3778, 14], [3778, 4028, 15], [4028, 4387, 16], [4387, 4695, 17], [4695, 4784, 18], [4784, 4792, 19], [4792, 4995, 20], [4995, 5167, 21], [5167, 5187, 22], [5187, 5939, 23], [5939, 5952, 24], [5952, 6703, 25], [6703, 6914, 26], [6914, 6952, 27], [6952, 7176, 28], [7176, 7631, 29], [7631, 7654, 30], [7654, 8902, 31], [8902, 9774, 32], [9774, 10264, 33], [10264, 10399, 34], [10399, 10591, 35], [10591, 12020, 36], [12020, 12575, 37], [12575, 13628, 38], [13628, 13750, 39], [13750, 14157, 40], [14157, 14662, 41], [14662, 16349, 42], [16349, 16950, 43], [16950, 18555, 44], [18555, 18567, 45], [18567, 18636, 46], [18636, 18676, 47], [18676, 18920, 48], [18920, 18988, 49], [18988, 19000, 50], [19000, 19026, 51], [19026, 19121, 52], [19121, 19121, 53], [19121, 19258, 54], [19258, 19284, 55], [19284, 19448, 56], [19448, 19486, 57], [19486, 19590, 58], [19590, 19616, 59], [19616, 19625, 60], [19625, 19651, 61], [19651, 19666, 62], [19666, 19726, 63], [19726, 19726, 64], [19726, 20349, 65], [20349, 20569, 66], [20569, 20870, 67], [20870, 21141, 68], [21141, 21167, 69], [21167, 21213, 70], [21213, 21234, 71], [21234, 21234, 72], [21234, 21420, 73], [21420, 21790, 74], [21790, 22218, 75], [22218, 22244, 76], [22244, 22358, 77], [22358, 22858, 78], [22858, 22878, 79], [22878, 23101, 80], [23101, 23424, 81], 
[23424, 23431, 82], [23431, 24610, 83], [24610, 27998, 84], [27998, 28008, 85], [28008, 28107, 86], [28107, 28274, 87], [28274, 28282, 88], [28282, 28343, 89]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 28343, 0.20978]]}
olmocr_science_pdfs
2024-12-02
2024-12-02
b8d129911438356dca3df2b05f6eca116ae508ff
Controlled conflict resolution for replicated document

Stéphane Martin, Mehdi Ahmed-Nacer, Pascal Urso

To cite this version: Stéphane Martin, Mehdi Ahmed-Nacer, Pascal Urso. Controlled conflict resolution for replicated document. 8th IEEE International Conference on Collaborative Computing: Networking, Applications and Worksharing, Oct 2012, Pittsburgh, Pennsylvania, United States. hal-00763410

HAL Id: hal-00763410
https://hal.archives-ouvertes.fr/hal-00763410
Submitted on 10 Dec 2012

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Abstract—Collaborative working is increasingly popular, but it presents challenges due to the need for high responsiveness and support for disconnected work. To address these challenges, the data is optimistically replicated at the edges of the network, i.e., on personal computers or mobile devices. This replication requires a merge mechanism that preserves the consistency and structure of the shared data under concurrent modifications. In this paper, we propose a design that ensures eventual consistency (every replica will eventually view the same data) and maintains the specific constraints of the replicated data. Our layered design gives the application engineer complete control over system scalability and over the behavior of the replicated data in the face of concurrent modifications. We show that our design allows replication of complex data types with acceptable performance.
Index Terms—optimistic replication, replicated document, collaborative editing

I. INTRODUCTION

Replication allows accessibility of shared data in collaborative tools (such as Google Docs) and mobile applications (such as Evernote or Dropbox). Indeed, collaboration is achieved by distinct sites that work independently on a replica, i.e., a copy of the document. Due to high responsiveness and disconnected work requirements, such applications cannot use lock or consensus mechanisms. However, the CAP theorem [3] states that a replicated system cannot ensure strong Consistency together with Availability and Partition tolerance. In such applications, where availability is required by users and partition is unavoidable, a solution is temporal divergence of replicas, i.e., optimistic replication. Of course, at the end of the modification process, users aim to have the same document. This consistency model is called “eventual consistency”; it guarantees that if no new update is made to the object, eventually all accesses will return the same value.

To obtain eventual consistency, a merge procedure that handles conflicting concurrent modifications is required. We consider that two concurrent modifications conflict if, once both are integrated, they violate the structural constraints of the data type. For instance, with a replicated structured document, concurrently adding two titles conflicts if the document type accepts only one title. To obtain a conflict-free replicated data type, the merge procedure must make an arbitrary choice (such as appending the titles, “priority-replica-wins”, “last-writer-wins”, etc.). Moreover, every replica must independently make the same choice. Conflict resolution is also a question of scalability and performance, since different choice procedures may have different computational complexities.
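For instance, the "last-writer-wins" choice mentioned above can be sketched as a tiny register. This is an illustrative design, not code from any cited system: each replica keeps the write with the highest timestamp and breaks ties deterministically on replica id, so every replica independently makes the same choice.

```python
class LWWRegister:
    """Last-writer-wins register: the (timestamp, replica_id, value)
    triple with the largest (timestamp, replica_id) wins everywhere."""

    def __init__(self, replica_id):
        self.replica_id = replica_id
        self.state = (0, replica_id, None)  # (timestamp, replica_id, value)

    def write(self, timestamp, value):
        # A local write is merged like any remote state.
        self.merge((timestamp, self.replica_id, value))

    def merge(self, remote_state):
        # max() over the triple is deterministic on every replica,
        # so concurrent writes resolve identically everywhere.
        self.state = max(self.state, remote_state)

    def lookup(self):
        return self.state[2]
```

Two replicas that write concurrently with the same timestamp converge once they exchange states, because the replica-id tiebreak is the same on both sides.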
Unfortunately, eventual consistency is harder to achieve in the face of complex conflict resolution, as demonstrated by the numerous proposed approaches that fail to ensure it even for a simple plain-text document [7], [13]. Indeed, the more complex the data type, the more conflicts appear. For instance, in a hierarchical document, modifications such as adding and removing an element, adding a paragraph while removing the section to which it belongs, or concurrently setting two titles, conflict.

We propose a framework that decouples eventual consistency management from data type constraint satisfaction. Our framework is made of layers. A layer can use the result of one or more independent layers. The lowest layers host the replicated data structures and are in charge of merging concurrent modifications. These lowest layers encapsulate an existing eventually consistent data type from the literature. The other layers are each in charge of ensuring a constraint on a data type. Such a layer does not modify the inner state of the replicated data but only computes a view that satisfies the constraint.

Our framework manages each conflict type independently while ensuring eventual consistency. Thanks to this layered design, any combination of conflict resolutions can be designed, giving the application complete control over system scalability and over the behavior of the replicated data in the face of concurrent mutations.

II. MOTIVATION

Our approach is based on the observation that obtaining eventual consistency while ensuring complex constraints on a data type is difficult. Thus, we propose to decouple eventual consistency from data integrity enforcement through layers. To illustrate the behavior of such a decoupling, let’s imagine a replicated file system. Ensuring eventual consistency of a file system is complex [5], while ensuring eventual consistency of a set can be achieved in numerous ways with quite simple algorithms. For instance, [17] defines multiple replicated sets with different behaviors and performance characteristics.
So we can imagine a file system as the set of absolute paths present in the file system.

1) A first layer contains the set of independent couples (path, type), the elements present in the file system. A type is either directory or file. This layer communicates with the first layer of the other replicas. It transmits simple messages that correspond to an addition or a suppression in the set. This layer alone ensures eventual consistency by merging these messages.

2) The second layer is in charge of producing a tree from the set of paths. To produce this tree, it must ensure the constraint that all nodes are accessible from the root. Indeed, if one replica removes a directory while another adds a file into that directory, the path to the file is present in the set while the path to the directory is not. Such a layer may drop this “orphan” file or place it under some special “lost-and-found” directory (see Section IV-B).

3) The third layer is in charge of producing a file system from the tree. It satisfies the unique-name constraint within a directory. Indeed, a directory may contain two children (one directory and one file) added concurrently with the same name. Such a layer may rename elements, or enforce a specific naming scheme when adding an element (e.g., files, and only files, must have an extension such as .java).

Replicated file systems (and some other complex data types) already exist in the literature. The advantage of our model is twofold. The first advantage is that only the first layer is in charge of merging concurrent operations. For the other layers, the data is handled as local data, simplifying the eventual consistency issues. The second advantage is the modularity of the approach. A layer that provides a data type can be freely substituted by another implementation.
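The role of the second layer can be sketched as follows (illustrative Python, not the authors' implementation). The first layer is reduced to a plain set of (path, type) couples; the lookup below computes the view that drops "orphan" paths whose ancestor directories were removed concurrently:

```python
def accessible_view(entries):
    """Layer-2 lookup: keep only entries whose every ancestor directory
    is itself present as a directory in the set.  Orphans are dropped
    here; a variant could instead move them under a special
    'lost-and-found' directory."""
    dirs = {path for path, kind in entries if kind == "dir"}
    view = set()
    for path, kind in entries:
        # Split an absolute path like "/a/b/c" into ancestors "/a", "/a/b".
        parts = path.rstrip("/").split("/")[1:-1]
        ancestors = {"/" + "/".join(parts[: i + 1]) for i in range(len(parts))}
        if ancestors <= dirs:
            view.add((path, kind))
    return view
```

Because this lookup is deterministic and only reads the inner set, any two replicas whose sets have converged compute the same view, which is exactly the decoupling the layered design relies on.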
Thus, our approach can provide many different behaviors, while each existing solution proposes only one or a small number of behaviors, with an associated performance level that may not be appropriate for every collaborative application context.

III. LAYERED DATA TYPES

We define a data type as an object with a two-method interface: i) the “lookup” method returns the data type state; ii) the “modify” method performs modifications on the data type state. A replicated data type is a data type with a communication interface used to merge its state with other replicas. Concretely, on each update invocation from an application, the replicated data type sends to the other replicas a message that represents the local modification. A replicated data type that receives such a message integrates it into its own state. We require that a replicated data type ensure eventual consistency. This means that, after all modifications have been performed, the invocation of the lookup method eventually returns the same result on every replica.

First, we encapsulate an existing eventually consistent data type in a replication layer. This kind of layer is the bottom layer of our model. It ensures communication between replicas and manages concurrent modifications. The other kind of layer we define is the adaptation layer, which uses the data provided by one or more layers and ensures a particular constraint on the data type. An adaptation layer can be placed on top of one or more layers, which can be replication or adaptation layers.

As presented in Figure 1, the generic computational aspect of our model is quite simple. When an application modifies a data type, it calls the modify function of the highest layer. That layer adapts the given local operation into one or more local operations applied on the layer just below. This layer will itself adapt these local operations for the third layer, and so on down to the replication layer.
Only the replication layer is in charge of communicating local updates to other replicas and of merging local and remote modifications. When the application asks for the value of the data type, it calls the lookup interface of the highest layer. The layer calls the lookup interface of the layer just below and computes a result corresponding to the application's needs.

![Fig. 1. Layers](image)

The lookup method of an adaptation layer recomputes its result entirely from the lookup invocation result(s) of the inner layer(s). This computation does not affect the inner-layer state, if any. Assuming this computation is deterministic and that the layer(s) below ensure(s) eventual consistency, we can prove straightforwardly that the adaptation layer provides an eventually consistent data type.

Such a computation must be done when a view is requested, but only if the inner data was modified since the last request. This is adapted to state-based replication mechanisms [16] (such as version control systems). State-based replication mechanisms transfer their whole state to other replicas; thus, fewer merges occur, but each merge may modify up to the whole state of the data. However, for operation-based replication mechanisms [16], which send update operations (or differences), we should define incremental adaptation layers.

**Incremental Layers**

An incremental adaptation layer stores the state of the data type that will be returned to the application. It modifies this data type each time its inner-layer state is modified, following an observer design pattern (see Figure 2). Therefore, it modifies only a part of the data type at a time. Potentially, an incremental lookup has better performance. Eventual consistency can be ensured by an equivalence between the incremental lookup and some non-incremental lookup. As with non-incremental layers, the computations of incremental layers do not affect their inner-layer state.
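A minimal sketch of this observer pattern (illustrative class and method names; a plain local set stands in for the replication layer): the bottom layer notifies its observers on every change, and the incremental adaptation layer patches its cached view instead of recomputing it on each lookup.

```python
import bisect

class SetLayer:
    """Stand-in for the bottom replication layer: on each modification
    it notifies its observers, as an incremental layer expects."""
    def __init__(self):
        self.state, self.observers = set(), []

    def modify(self, op, element):
        if op == "add":
            self.state.add(element)
        else:
            self.state.discard(element)
        for obs in self.observers:
            obs.update(op, element)

class SortedViewLayer:
    """Incremental adaptation layer: maintains a sorted view of the
    inner set, updating only the affected part on each notification."""
    def __init__(self, inner):
        self.view = sorted(inner.state)
        inner.observers.append(self)

    def update(self, op, element):
        if op == "add":
            bisect.insort(self.view, element)   # O(n) insert, no full re-sort
        elif element in self.view:
            self.view.remove(element)

    def lookup(self):
        return self.view                        # already up to date
```

The non-incremental equivalent would simply return `sorted(inner.state)` on each lookup; the equivalence of the two lookups is what carries eventual consistency up through the layer.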
Even if incremental layers seem better adapted to operation-based replication mechanisms, any combination of layers can be constructed. Indeed, a state-based replication layer that notifies changes to its observers can be used below an incremental layer. Also, an incremental layer can be used below a non-incremental one.\footnote{This last combination can be useful when no incremental solution is available for a given constraint (for XSD schema repairing, for instance).}

IV. EXAMPLES

This section presents several examples of data types that can be obtained using our framework. Due to space limitations, only some of them are completely detailed.

A. Text data type

In this section, we show how to obtain a text data type, i.e., an ordered sequence of elements (lines, characters, paragraphs, etc.). Despite its apparent simplicity, this is a non-trivial problem, as evidenced by the huge literature on the subject: [13], [24], [14]. The challenge comes from puzzles such as TP2 puzzles [22], where two elements are inserted concurrently just before and just after an element that is being deleted. Since the deleted element no longer separates the inserted ones, they may be swapped.

We present a composition of two layers that ensures the ordering constraint. We associate each element with an immutable piece of ordering information called a position identifier (PI). As presented in Figure 3, we define an adaptation ordering layer on top of a set replication layer. The set contains elements coupled with a position identifier (PI). For example, the sequence 'AC' corresponds to the set {('A', p_A), ('C', p_C)}. To add 'B' between 'A' and 'C', we must forge p_B such that p_A < p_B < p_C. The set becomes {('A', p_A), ('C', p_C), ('B', p_B)}. The "lookup" function uses the total order between PIs to compute the ordered sequence 'ABC'.

![Fig. 3. Text data type using sets](image)

Position identifiers are defined in a dense space equipped with a total ordering relation.
The total order ensures that any pair of elements appears in the same order on each replica. The space is dense to allow insertion of an element between any two others. Such spaces already exist in the literature. Logoot [26] and FCEdit [10] use integers or strings concatenated with unique identifiers; the ordering relation is a lexicographic ordering. The Treedoc [14] algorithm uses a depth-first search on a binary tree as ordering; the position identifier of Treedoc is a path in this tree, with unique identifiers to distinguish two similar paths. The algorithms cited above generate unique identifiers (unique across all replicas). These identifiers are unique to ensure eventual consistency. So, when the same element is added concurrently at the same place, it is inserted twice with two different identifiers. For instance, if two users aim to correct the word 'ct' into 'cat', these algorithms add two 'a's and the word becomes 'caat'. In our framework, the set ensures eventual consistency, so we can relax the uniqueness of the position identifier. For instance, in Logoot positions, the operation timestamp could be replaced by the element itself. Thus, we obtain a different behavior than the above algorithms, since the concurrent insertion of the same element at the same position leads to a single appearance. This behavior may seem more natural to users and is the behavior (called "accidental clean merge") of most version control software (Git, SVN, etc.). Obviously, not all editing conflicts can be resolved using such approaches. However, thanks to our layered framework, one can add a semantic correction layer such as [4] above our own layers. We define a couple object which contains a position identifier and a label. We assume that each ordering algorithm implements the interface described in Figure 4. ![Fig. 4.
Interface of ordering algorithm](image) We define the Ordering layer in two versions: the non-incremental version in Figure 5 and the incremental version in Figure 6. The difference between the two versions is the presence of the inner state: the non-incremental layer must order the set to compute a lookup or to modify the sequence, while the incremental version uses its inner state to avoid re-computation. The application or upper layer invokes the modify function of the ordering layer with an operation as argument. This operation can be an add or a delete operation. For both layer versions, the parameters of the "add" operation are an element (line, character, ...) and an integer position. In this case, the layer gets the PIs of the previous and next elements from the lookup list. It generates a position identifier between these two PIs with the help of the ordering algorithm (generatePI) (l.9, Fig. 5 and Fig. 6) and stores the couple formed by the added element and the generated position identifier in the inner set (l.15).\footnote{Two 'a's added sequentially, for instance in the word 'aardvark', will have different PIs.} In case of a delete, the operation contains only the position of the element to remove. The modify function gets the element from the lookup list (l.12) and forges the operation for deletion from the inner set (l.13). The difference between the incremental and non-incremental versions is: in the non-incremental version, the lookup list is built from the inner set (using the ordering algorithm) on each call (l.6, Fig. 5), while the lookup of the incremental version returns its own up-to-date list (l.3, Fig. 6). In the incremental case, when the inner set is modified by a local or remote operation, the layer is notified and the update function is called. The update function places the new element in the layer state at the position given by the ordering algorithm (l.22, Fig. 6), or deletes from the layer state the element which contains the position (l.24).
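As an illustration of the dense, totally ordered identifier space that generatePI relies on, here is a hypothetical Logoot-inspired sketch (the class name `PositionId` and method name `between` are ours, not the paper's): an identifier is a list of non-negative integers compared lexicographically, and a fresh identifier strictly between any two distinct identifiers can always be generated by descending one level deeper when no gap is left.

```java
import java.util.*;

// Hypothetical sketch of a dense, totally ordered position-identifier space in
// the spirit of Logoot. Assumes non-negative digits and between(p, q) with p < q.
public class PositionId implements Comparable<PositionId> {
    final List<Integer> digits;
    PositionId(Integer... ds) { digits = List.of(ds); }
    PositionId(List<Integer> ds) { digits = ds; }

    public int compareTo(PositionId o) {        // lexicographic order
        for (int i = 0; i < Math.min(digits.size(), o.digits.size()); i++) {
            int c = Integer.compare(digits.get(i), o.digits.get(i));
            if (c != 0) return c;
        }
        return Integer.compare(digits.size(), o.digits.size());
    }

    // Generate an identifier r with p < r < q (the analogue of generatePI).
    static PositionId between(PositionId p, PositionId q) {
        List<Integer> r = new ArrayList<>();
        for (int i = 0; ; i++) {
            int lo = i < p.digits.size() ? p.digits.get(i) : 0;
            int hi = i < q.digits.size() ? q.digits.get(i) : Integer.MAX_VALUE;
            if (hi - lo > 1) { r.add(lo + (hi - lo) / 2); return new PositionId(r); }
            r.add(lo);                           // no room at this depth: go deeper
        }
    }

    public static void main(String[] args) {
        PositionId pA = new PositionId(1), pC = new PositionId(2);
        PositionId pB = between(pA, pC);
        System.out.println(pA.compareTo(pB) < 0 && pB.compareTo(pC) < 0); // true
    }
}
```

Density is what matters here: however close two identifiers are, `between` can still forge one in the gap, which is exactly what the add operation of the ordering layer needs.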
```java
class OrderingLayer{
    Ordering algo;

    void modify(SequenceOperation change){
        SetOperation op;
        List<Couple> list = lookup(); // reordering
        if (change.type == add){
            int pos = change.position;
            // generate a PI between the previous and next elements
            PI pi = algo.generatePI(list.get(pos), list.get(pos + 1));
            op = new SetOperation(add, new Couple(change.label, pi));
        } else { // del operation
            Couple c = list.get(change.position);
            op = new SetOperation(del, c);
        }
        innerSet.modify(op);
    }

    List<Couple> lookup(){
        return algo.order(innerSet.lookup());
    }
}
```
Fig. 5. Non-Incremental Sequence layer

**B. Unordered tree**

In this section, we design replicated unordered trees. An unordered tree node contains a label $\in \Sigma$, a father, and a set of children. The root is a special node without father or label. As presented in Figure 7, to provide this tree, the layer uses a set of paths. More formally, we define a path as a sequence of labels: $p \in Path, p = l_1l_2...l_n, l_i \in \Sigma, \forall i \in [1..n]$. Each path in this set represents a node. For example, the tree drawn in Figure 8 is represented by $\{a, ab, ac\}$. In this example, when replica 2 adds $c$ under $b$, the word $abc$ is added to the inner set. When replica 1 removes $b$, the word $ab$ is deleted from the inner set. Then, both replicas exchange these operations and their states become $\{a, ac, abc\}$. This set does not directly represent a tree, because the node $b$ is not present yet has one child. We call the path $abc$, respectively the node represented by this path, an orphan path, respectively an orphan node. In this case, there are different ways to adapt the tree from the path set. Each way produces a different behavior.
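Two of these adaptations — dropping orphan paths, or recreating their missing ancestors — can be sketched as follows (illustrative Java, not the paper's implementation; `isOrphan` and the method names are ours). A path is modeled as a string of one-character labels, and it is an orphan when some proper prefix is missing from the lookup set.

```java
import java.util.*;

// Illustrative sketch of two connection policies over a lookup set of paths,
// each path a string of one-character labels.
public class ConnectPolicies {
    static boolean isOrphan(String path, Set<String> ls) {
        for (int i = 1; i < path.length(); i++)
            if (!ls.contains(path.substring(0, i))) return true;
        return false;
    }

    // Drop-orphans policy: keep only non-orphan paths.
    static Set<String> skip(Set<String> ls) {
        Set<String> lt = new TreeSet<>();
        for (String p : ls) if (!isOrphan(p, ls)) lt.add(p);
        return lt;
    }

    // Recreate-ancestors policy: add every prefix of each path.
    static Set<String> reappear(Set<String> ls) {
        Set<String> lt = new TreeSet<>();
        for (String p : ls)
            for (int i = 1; i <= p.length(); i++) lt.add(p.substring(0, i));
        return lt;
    }

    public static void main(String[] args) {
        // State from the example above: {a, ac, abc} — "abc" is an orphan (no "ab").
        Set<String> ls = new HashSet<>(List.of("a", "ac", "abc"));
        System.out.println(skip(ls));     // [a, ac]
        System.out.println(reappear(ls)); // [a, ab, abc, ac] — "ab" reappears
    }
}
```

Both results are deterministic functions of the inner set, so if the inner set converges, each policy yields a convergent tree view.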
In Figure 9, we present four different behaviours: i) the skip behaviour does not return orphan nodes; ii) the reappear behaviour returns orphan nodes at their original paths; if the node $abc$ is finally deleted, $ab$ disappears; iii) the root behaviour places orphans under a specific directory (root or lost-and-found); iv) the compact behaviour moves the $c$ node under node $a$; both $ac$ are merged.

```java
class OrderingLayer{
    Ordering algo;
    List<Couple> list;

    void modify(SequenceOperation change){
        SetOperation op;
        if (change.type == add){
            int pos = change.position;
            PI pi = algo.generatePI(list.get(pos), list.get(pos + 1));
            op = new SetOperation(add, new Couple(change.label, pi));
        } else { // del operation
            Couple c = list.get(change.position);
            op = new SetOperation(del, c);
        }
        innerSet.modify(op);
    }

    void update(SetOperation change){
        Couple couple = change.label;
        if (change.type == add){
            int pos = getPos(couple.pi, list);
            list.add(pos, couple);
        } else { // delete
            list.remove(couple);
        }
    }

    List<Couple> lookup(){
        return list;
    }
}
```
Fig. 6. Incremental Sequence layer

![Fig. 7. Layered tree](image)

More formally, we call an orphan path a path in the inner set lookup ($LS$) that has a prefix which is not in $LS$. We start by adding all non-orphan paths of $LS$ to the lookup of the tree ($LT$). Then, we treat the orphan paths of $LS$ in length order (shortest first, then $\Sigma$ order). Considering each orphan path $a_1a_2...a_n \in LS$ with $a_i \in \Sigma, \forall i \in [1..n]$, we can apply the following connection policies: **skip**: drops the orphan path. **reappear**: recreates the path leading to the orphan path; we add all $a_1...a_j$ with $j \in [1..n]$. **root**: places the orphan subtree under the root; we add $a_j...a_n$ to $LT$ with $j$ such that $a_1...a_{j-1} \notin LS$ and $\forall k \in [j..n], a_1...a_k \in LS$. **compact**: places the orphan subtree under its longest non-orphan prefix.
We add $a_1...a_ma_j...a_n$ to $LT$ with $j$ and $m$ such that $m < j$ and $a_1...a_m \in LT$ and $a_1...a_{m-1} \notin LS$ and $a_1...a_{j-1} \notin LS$ and $\forall k \in [j..n], a_1...a_k \in LS$.

Using any of the above policies ensures that the lookup trees presented to the client by any layered tree are eventually consistent. Indeed, we assume that the inner set is eventually consistent, and the tree lookup is deterministically computed from the inner set lookup. The modify interface receives operations that add a new node or remove an existing one; the prototype of such an operation is Operation(Optype optype, Path path), whose constructor takes the type of operation (add or delete) and a path in the path set. To remove a node, the algorithm looks for all of its children to remove them from the inner set. Since this policy does not modify paths, the add operation is unchanged. In our example, $b$ is a ghost (see Fig. 9ii)).

Fig. 10. Update function for incremental reappear policy

3) Root policy: The root algorithm moves all orphan nodes to the root or some special "lost-and-found" directory. The update function of this algorithm is presented in Figure 11.\footnote{Due to space limitation, the skip and compact policies are not presented, but they are implemented in our open-source framework.} When two nodes with the same label are orphans, the orphans are merged and the view presents only one node under the root. The internal state of the connecting layer is a decorated tree: nodes are decorated with Paths, the set of original paths leading to the node. The connecting layer also uses path2node, a map linking original paths to node objects. When a node is added, if its path is a prefix of orphan paths, then all corresponding nodes are reattached by the move function. The move function looks for all prefixes in the Paths of all children of the root node and removes them; it adds the node to reattach and adds this prefix. All nodes with empty Paths are deleted. The modify function browses the tree through a path, takes the last node, and forges the operation with the Paths. For example, in case of an add operation, the modify function adds each element of Paths concatenated with the new label; in case of a delete operation, it deletes every path present in Paths. In our example (Fig. 9iii)), when $b$ is deleted and $c$ is added under $b$, $c$ is moved under the root. However, a node $c$ is already under the root: the two $c$ nodes are merged, and $c$ contains both the path $c$ and the path $abc$.

C. Ordered Tree Data Type

In this section, we design ordered trees. As presented in Figure 12i), we directly use the unordered tree data structure and add an ordering layer. To order the children of a node, we use position identifiers (introduced in Section IV-A): we mark all labels with a position identifier, so the nodes become totally ordered. A path in the set managed by the replication layer is represented by $p = (l_1, p_1) \cdots (l_n, p_n)$ with $l_i \in \Sigma$ a label and $p_i$ a position identifier. However, the modify interface of the tree ordering layer must be independent of the chosen ordering algorithm, so the ordering layer interface receives operations based on paths defined by integer positions without labels (e.g., 2.4.5.1).
```java
// move the orphan nodes whose paths have the given prefix from srcFather to dest
void move(Node srcFather, Node dest, Path path) {
    for (Node child : srcFather.getChildren()) {
        // make the child path with the prefix and the child label
        Path childPath = new Path(path, child.getLabel());
        if (!child.containsGoodPrefix()) {
            srcFather.del(child);
            Node node = dest.add(child.getLabel(), childPath);
            path2node.put(childPath, node);
        }
    }
}

void update(SetOperation change) {
    Path path = change.getContent();
    if (change.getAction() == add) { // add
        Path fatherPath = path.clone();
        Label last = fatherPath.removeLast();
        Node father = tree.path2node.get(fatherPath);
        if (father == null) // orphan node
            father = tree.root;
        Node node = father.add(last, path);
        path2node.put(path, node);
    } else { // remove
        Node node = tree.path2node.get(path);
        move(node, tree.root, path); // reattach the children under the root
        del(node, path); // remove the node if its Paths set is empty
    }
}
```
Fig. 11. Update function for incremental root policy

Fig. 12. Ordered tree

Each integer position corresponds to a child number in the ordered tree. For example, consider the tree in Figure 12ii). The inner replicated set contains $\{a_{p_a},\; b_{p_b},\; a_{p_a}c_{p_c},\; a_{p_a}d_{p_d}\}$ with $p_b \prec p_a$ and $p_c \prec p_d$. The ordered path leading to $c$ is 2.1. In fact, much as for the unordered tree, the layer state contains nodes, but each node additionally stores its position identifier, and the children are ordered by the chosen ordering algorithm. The modify function converts an integer position path $j_1...j_n, j_i \in \mathbb{N}$, into a path containing couples of label and position identifier. It browses through the tree and pushes the couple of label and position identifier of each node, up to the last but one. If the operation is an add, the last position identifier is generated for position $j_n$, since the last position of the path designates the new node. In case of a delete operation, the modify function converts the whole path.
The update function receives a path with labels and position identifiers from the inner set. It browses through the tree up to the last node but one of the path. The algorithm can use a hashmap or a dichotomic search to find a node in the ordered children list. In case of an add operation, the update function adds the new node at the place defined by the ordering relation. In case of a delete, the update function deletes the node.

D. Extension to schema

In this section, we consider ordered trees with a schema (such as XSD or DTD for XML documents). Concurrent modifications can produce a tree which does not respect the schema. For example, consider a schema which accepts zero or one title element: if two users concurrently add a title, they create two title nodes in the internal tree data type. To fix this, we add a new layer called schema repair. In this layer (see Fig. 13), the lookup interface calls a repair algorithm (such as [19]) to return a valid tree. The modify function must ensure that each operation generated on the lookup view is valid on the internal tree. ![Fig. 13. Tree with schema](image)

Optimization with DTD schema: The particularity of DTD is that it is a relatively poor schema language: an add or remove of a node can invalidate only a part of the tree. It is possible to use a sub-quadratic algorithm [27] for approximate regular expression matching on the children to fix the tree. All edges added by this algorithm could be added with a template of recursively valid children.

E. Directed acyclic graph

This kind of data type can be used for task dependency representation, such as Gantt or PERT diagrams. In this example, we use two replicated sets: a set of nodes and a set of edges. The nodes represent the tasks, and the edges represent the dependencies between the tasks. Two concurrent dependency additions conflict when they introduce a cycle in the graph. An un-cycling layer resolves such a conflict by traversing the graph using a breadth-first search (see Fig. 14).
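The cycle test at the heart of such an un-cycling layer can be sketched as follows (a hypothetical minimal version; the class and method names are ours, and the choice of which concurrent edge to drop is simplified to "drop the later one"): before exposing an edge u → v in the view, a breadth-first search checks whether v already reaches u, in which case the edge would close a cycle and is excluded.

```java
import java.util.*;

// Hypothetical sketch of an un-cycling check over a set of dependency edges.
public class UnCycling {
    final Map<String, Set<String>> adj = new HashMap<>(); // accepted edges

    // Breadth-first search: can `from` reach `to` through accepted edges?
    static boolean reaches(Map<String, Set<String>> g, String from, String to) {
        Deque<String> queue = new ArrayDeque<>(List.of(from));
        Set<String> seen = new HashSet<>(queue);
        while (!queue.isEmpty()) {
            String n = queue.poll();
            if (n.equals(to)) return true;
            for (String m : g.getOrDefault(n, Set.of()))
                if (seen.add(m)) queue.add(m);
        }
        return false;
    }

    // Returns true if the dependency edge u -> v is kept in the acyclic view.
    boolean addEdge(String u, String v) {
        if (reaches(adj, v, u)) return false;          // would create a cycle
        adj.computeIfAbsent(u, k -> new HashSet<>()).add(v);
        return true;
    }

    public static void main(String[] args) {
        UnCycling g = new UnCycling();
        System.out.println(g.addEdge("a", "b")); // true
        System.out.println(g.addEdge("b", "a")); // false: would close a cycle
    }
}
```

For the result to be eventually consistent, all replicas must apply the same deterministic tie-breaking rule when deciding which of two conflicting edges to exclude; this sketch only shows the reachability test itself.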
![Directed acyclic graph](image)

V. EXPERIMENTAL EVALUATION

To evaluate the performance of our approach, we have implemented it in the framework ReplicationBenchmark, developed in Java and available on the GitHub platform under the terms of the GPL license. In this framework, we have implemented different set layers, different ordering algorithms, the connecting layer with the four policies described in Section IV-B, and the tree ordering layer described in Section IV-C. The framework follows our layer structure. For instance, creating an ordered tree based on a reappearance policy and a counter replicated set is done by the following Java expression:

```java
new PositionIdentifierTree(new WordTree(new ReappearancePolicy(), new CounterSet()));
```

The framework provides base classes for common elements, such as a version vector, and set, tree, and ordered tree operations. The framework provides a simulator that randomly generates a trace of operations according to provided parameters such as trace length, percentage of additions and removals, number of replicas, communication delay, etc. It also provides a controlled simulation environment that replays a trace of operations and measures the performance of the replicated algorithms. The simulation ensures that each replica receives operations in the order defined in the logs. The framework lets the replicas of every algorithm generate operations in their own formats for the trace operations provided from the simulated logs. The trace used to run our experiment has 30000 operations with 88% insertions and four replicas; the trace is available on the web. We call a local operation an operation appearing in the trace; such an operation is given to the modify interface. For an ordered tree, operations are the insertion of an element or the deletion of a sub-tree. A local operation is divided into one to several remote operations that the simulation sends to the remote replicas. A replica, therefore, also executes remote operations.
We measure the net execution time of local and remote operations for each algorithm. The framework uses `java.lang.System.nanoTime()` to measure the execution time of each local and remote operation. To obtain a reliable result, we ran each algorithm on the traces three times on the same JVM execution. We also measure the memory occupied by each algorithm: we serialize each document replica using Java serialization after every hundred generated operations and measure the size of the serialized object. All executions are run on the same JVM, on a dual-processor machine with Intel(R) Xeon(R) 5160 dual-core processors (4Mb cache, 3.00 GHz, 1333 MHz FSB) running GNU/Linux 2.6.9-5. During the experiment, only one core was used for measurement. All graphics are smoothed by Bezier curves. Before presenting the results of the experiment, we briefly describe some representative existing algorithms with which we compare our approach.

A. TreeOPT and OTTree

TreeOPT (tree OPerational Transformation) [6] is a general algorithm designed for hierarchical and semi-structured documents. Each node contains an instance of an operational transformation algorithm [2], [15], [21]. The algorithm applies the operational transformation mechanism recursively over the different document levels. In our experimentation, we have used this algorithm with the SOCT2 [20] algorithm and the TTF (Tombstone Transformation Functions) approach [13]. As a small optimization, we save only insertion operations in the log of SOCT2. OTTree, an unpublished algorithm, uses only one instance of SOCT2 for the entire tree (not one per node) and TTF on each children list. The TTF operations and their integration function were modified to include the path information.

B. FCEdit

FCEdit [10] is a CRDT designed for collaborative editing of semi-structured documents. It associates a unique identifier with each element and maintains an identifier → node map.
So it just uses a hash table to find an element in the tree. Each children list is ordered by position identifiers. Unlike OTTree, FCEdit does not need to store elements as tombstones: the elements are really deleted from the tree, making it more efficient in memory. In the following, we present the behavior of each ordered tree algorithm executed on the simulated traces with the different policies described in Section IV.

C. Execution times

In [8], studies have shown that users can comfortably observe modifications in their application if the local and remote response times do not exceed 50 ms. In this section, we present an experimental evaluation of algorithms based on our layer structure, compared to existing ones, to verify whether this design is suitable for real-time collaborative applications.

1) Skip policy: a) Local operations: The average execution times of local operations are presented in Figure 15. The algorithms based on the layer structure (Logoot and WOOTH) are less efficient than the existing algorithms (OTTree and FCEdit), but they remain stable throughout the experiment. They do not exceed 30 µs, and thus stay far below 50 ms, which makes them acceptable for the users. The performances of OTTree and TreeOPT, based on the SOCT2 algorithm, degrade at the beginning of the experiment: since the rate of insertions is greater than that of deletions, the tree quickly becomes large. TreeOPT performs an operation for each element of the path, contrary to OTTree; this explains why the difference between the two algorithms depends on the tree depth. After 100 000 operations, the majority of the algorithms become stable. FCEdit is the best algorithm: since each node is identified by a unique identifier, using a hash table to link identifiers and nodes, it obtains a complexity around $O(1+n/k)$ in the average case. Such a "trick" is only possible because FCEdit uses unique identifiers.
The global performance behaviors of Logoot and WOOTH are quite similar, even though they are very different algorithms. This shows that the layer structure has a performance cost, but this cost remains stable and the execution time does not exceed 30 µs.

b) Remote operations: In Figure 16, we present the execution time behaviour of the algorithms using the skip policy for remote operations, on a logarithmic scale. To simulate a realistic experiment, the garbage collection mechanism of SOCT2 is disabled. Indeed, when users may disconnect, the garbage collection mechanism of SOCT2 cannot purge the history. The performances of OTTree and TreeOPT degrade over time since the SOCT2 algorithm cannot purge the history: all received operations are stored in the history, and it takes time to separate the concurrent operations and transform them, which makes these algorithms the least efficient. Even if some garbage collection mechanisms exist, we consider that they cannot be used in a general context where the number of replicas is unknown and fluctuating. As for local operations, the behavior of the Logoot and WOOTH algorithms remains stable; although these algorithms are based on the layer structure, they outperform OTTree and TreeOPT, at about 10 µs compared to 10 ms. The performance of FCEdit remains good and stable during the whole experiment; at just 3 µs, it is the best algorithm in our experiment.

2) Comparing policies: In what follows, we present the behavior of the Logoot algorithm with the different policies, and also WOOTH with the reappear policy. For an ordered tree based on the WOOTH algorithm, the root and compact policies are not permitted, because we cannot merge nodes, whose identifiers depend on their previous and next elements, with nodes located at a different origin.

a) Local operations: In Figure 17, the global performance behaviors are the same except for the root policy. In both the root and compact policies, the algorithm must move every deleted subtree.
In the case of the root policy, the deleted subtree moves under the root, while for the compact policy it moves under its last remaining father in the tree. When the node located at the origin path has the same label as a node at the new path, the two nodes are merged. Since the number of nodes located under the root in the root policy is greater than the number of children under a node in the compact policy, finding the nodes with the same label takes more time in the root policy than in the compact policy. Indeed, all nodes deleted in the tree are located under the root, whereas in the compact policy a node contains its own children plus the nodes removed from its children.

b) Remote operations: The behavior of the different algorithms for remote operations, presented in Figure 18, is slightly different from Figure 17, since the behaviors are more chaotic for the root policy. The behavior of Logoot with the skip policy is the most stable: the average execution time remains around 10 µs. As previously, the root behavior is the least efficient and the most chaotic; it improves when a replica deletes a path from the tree, as at operation number 6000 or 23000. Finally, although some algorithms are less efficient than others, the execution time never exceeds 1 ms (far below 50 ms), and almost every algorithm has a very stable behaviour below 30 µs. The algorithms based on the layer structure are acceptable and suitable for real-time collaboration. Moreover, they outperform some representative operational transformation algorithms such as OTTree.

D. Memory occupation

The size of the memory occupied by each studied algorithm may increase over time due to history, tombstones, or growing identifiers. We present in the following the behavior of the algorithms regarding memory usage in the case of the skip policy, on a logarithmic scale, illustrated in Figure 19.
A tree based on the WOOTH algorithm occupies more memory than the other tree algorithms, since in WOOTH an identifier is never deleted but just kept as a tombstone and marked as invisible to users. OTTree and the tree based on the Logoot algorithm have almost the same behavior. The memory occupied by Logoot depends on the size of the Logoot identifiers, whereas that of OTTree depends on the number of operations generated: indeed, SOCT2, used in OTTree, stores all operations in the history; in addition, the garbage collector was disabled, and a deleted node is never removed. TreeOPT consumes more memory than OTTree because each node has a SOCT2 instance with its own log. FCEdit remains the best algorithm regarding memory space, since its identifiers are less costly than Logoot's and removed nodes are really deleted, contrary to WOOTH and OTTree.

VI. RELATED WORK

Some collaborative systems, such as version control systems (Git, SVN, etc.) or distributed file systems [5], rely on human merging phases for some conflict cases, while other conflicts are resolved automatically. For instance, SVN creates a "tree conflict" when a file is created in a concurrently deleted directory. On the other hand, Git's behavior is similar to the "reappear policy" (see Section IV-B), since it silently recreates the directory. However, human conflict resolution does not scale to massive collaboration use cases, and conflicts on complex data types may be difficult to represent and resolve: for instance, Git is unable to merge XML files correctly. Our approach automatically computes a best-effort merge, and can be combined with awareness mechanisms [1] to let users be conscious of concurrent modifications. There exist many systems which satisfy the eventual consistency property. Industrial systems, such as NoSQL data stores (Amazon S3, CouchDB, Cassandra, etc.), rely on eventual consistency, but only manage key-value data types.
The Bayou [23] and IceCube [9] systems use constraint resolution mechanisms to resolve conflicts, so they can ensure generic data type constraints. However, these approaches do not scale well, since they require a central or primary server; moreover, as in version control systems, the system is not stable as soon as the updates are delivered, since their merge procedures produce new operations. Replicated data types are well known in the literature: for instance, there exist sets [18], sequences [13], [25], trees [12], file systems [5], etc. In Operational Transformation (OT) [2], replicas transform received operations against concurrent ones. The OT approach has been successfully applied in several general-public collaborative editing systems, including Google Docs. Conflict-free Replicated Data Types (CRDTs) [18] aim to design replicated data types that integrate remote modifications without transformation. The goal of our approach is to encapsulate any eventually consistent approach (OT or CRDT) in a replication layer and to design adaptation layers on top of it to satisfy non-trivial constraints. For instance, in our implementation (see Section V), we have implemented and tested tree layers on top of both different CRDT sets and OT sets.

VII. CONCLUSION

In this paper, we have presented a layered approach to design eventually consistent data types. Our approach composes one or several existing replicated data types, which ensure eventual consistency, with adaptation layers, to obtain a new eventually consistent data type. Each layer or replicated data type can be freely substituted by one providing the same interface. We have demonstrated that our approach is implementable and obtains acceptable performance, even if this performance is sometimes slightly worse than that of some specific algorithms. Our experiments and implementation are publicly available and replayable.
Compared to existing solutions, the composition design can precisely fit the distributed application engineer's wishes in terms of behavior and scalability. In future work, we will run experiments on real data, such as Git software histories, and we will formally establish the equivalence proof between the incremental and non-incremental algorithms.

ACKNOWLEDGEMENT

This work is partially supported by the ANR national research grants STREAMS (ANR-10-SEGI-010) and ConcoRDanT (ANR-10-BLAN-0208).

REFERENCES
Chapter 3 Mining Frequent Patterns in Data Streams at Multiple Time Granularities Chris Giannella*, Jiawei Han†, Jian Pei‡, Xifeng Yan†, Philip S. Yu§ *Indiana University, cgiannel@cs.indiana.edu †University of Illinois at Urbana-Champaign, {hanj,xyan}@cs.uiuc.edu ‡State University of New York at Buffalo, jianpei@cse.buffalo.edu §IBM T. J. Watson Research Center, psyu@us.ibm.com Abstract: Although frequent-pattern mining has been widely studied and used, it is challenging to extend it to data streams. Compared with mining a static transaction data set, the streaming case has far more information to track and far greater complexity to manage. Infrequent items can become frequent later on and hence cannot be ignored. The storage structure needs to be dynamically adjusted to reflect the evolution of itemset frequencies over time. In this paper, we propose an approach based on computing and maintaining all the frequent patterns (which are usually more stable and fewer than the streaming data) and dynamically updating them with the incoming data stream. We extend the framework to mine time-sensitive patterns with an approximate support guarantee. We incrementally maintain tilted-time windows for each pattern at multiple time granularities. Interesting queries can be constructed and answered under this framework. Moreover, inspired by the fact that the FP-tree provides an effective data structure for frequent pattern mining, we develop FP-stream, an FP-tree-based data structure for maintaining time-sensitive frequency information about patterns in data streams. The FP-stream can be scanned to mine frequent patterns over multiple time granularities. An FP-stream structure consists of (a) an in-memory frequent pattern-tree to capture the frequent and sub-frequent itemset information, and (b) a tilted-time window table for each frequent pattern. Efficient algorithms for constructing, maintaining and updating an FP-stream structure over data streams are explored.
Our analysis and experiments show that it is realistic to maintain an FP-stream in data stream environments even with limited main memory. **Keywords:** frequent pattern, data stream, stream data mining. ### 3.1 Introduction Frequent-pattern mining has been studied extensively in data mining, with many algorithms proposed and implemented (for example, Apriori [1], FP-growth [10], CLOSET [17], and CHARM [19]). Frequent pattern mining and its associated methods have been popularly used in association rule mining [1], sequential pattern mining [2], structured pattern mining [13], iceberg cube computation [4], cube gradient analysis [12], associative classification [14], frequent pattern-based clustering [18], and so on. Recent emerging applications, such as network traffic analysis, web click stream mining, power consumption measurement, sensor network data analysis, and dynamic tracing of stock fluctuation, call for study of a new kind of data, stream data. Stream data takes the form of continuous, potentially infinite data streams, as opposed to finite, statically stored data sets. Stream data management systems and continuous stream query processors are under intense investigation and development. Besides querying data streams, another important task is to mine data streams for interesting patterns. There are some recent studies on mining data streams, including classification of stream data [7, 11] and clustering data streams [9, 16]. However, it is challenging to mine frequent patterns in data streams because mining frequent itemsets is essentially a set of join operations as illustrated in Apriori whereas join is a typical blocking operator, i.e., computation for any itemset cannot complete before seeing the past and future data sets. Since one can only maintain a limited size window due to the huge amount of stream data, it is difficult to mine and update frequent patterns in a dynamic, data stream environment. 
In this paper, we study this problem and propose a new methodology: mining time-sensitive data streams. Previous work [15] studied the landmark model, which mines frequent patterns in data streams by assuming that patterns are measured from the start of the stream up to the current moment. The landmark model may not be desirable, since the set of frequent patterns is usually time-sensitive and, in many cases, changes of patterns and their trends are more interesting than the patterns themselves. For example, a shopping transaction stream could have started a long time ago (e.g., a few years ago), and a model constructed by treating all the transactions, old or new, equally cannot be very useful in guiding the current business, since some old items may have lost their attraction; fashion and seasonal products may change from time to time. Moreover, one may not only want to fade (e.g., reduce the weight of) old transactions but also to find changes or the evolution of frequent patterns with time. In network monitoring, the changes of the frequent patterns in the past several minutes are valuable and can be used for the detection of network intrusions [6]. In our design, we actively maintain pattern frequency histories under a tilted-time window framework in order to answer time-sensitive queries. A collection of patterns along with their frequency histories is compressed and stored using a tree structure similar to the FP-tree [10] and updated incrementally with incoming transactions. In [10], the FP-tree provides a base structure to facilitate mining in a static batch environment. In this paper, an FP-tree is used for storing the transactions of the current time window; in addition, a similar tree structure, called a pattern-tree, is used to store collections of itemsets and their frequency histories. Our time-sensitive stream mining data structure, FP-stream, includes two major components: (1) a pattern-tree, and (2) tilted-time windows. We summarize the contributions of the paper.
First, we develop a data structure, FP-stream, supporting time-sensitive mining of frequent patterns in a data stream. Next, we develop an efficient algorithm to incrementally maintain an FP-stream. Third, we describe how time-sensitive queries can be answered over data streams with an error bound guarantee. The remainder of the paper is organized as follows. Section 3.2 presents the problem definition and provides a basic analysis of the problem. Section 3.3 presents the FP-stream data structure. Section 3.4 introduces the maintenance of tilted-time windows, while Section 3.5 discusses the issues of minimum support. The algorithm is outlined in Section 3.6. Section 3.7 reports the results of our experiments and performance study. Section 3.8 discusses how the FP-stream can be extended to include fading time windows. Section 3.9 discusses some of the broader issues in stream data mining and how our approach applies.

### 3.2 Problem Definition and Analysis

Our task is to mine frequent patterns over arbitrary time intervals in a data stream, assuming that one can only see the set of transactions in a limited size window at any moment. To study frequent pattern mining in data streams, we first examine the same problem in a transaction database. To determine whether a single item \( i_a \) is frequent in a transaction database \( DB \), simply scan \( DB \) and count the number of transactions in which \( i_a \) appears (the frequency). The frequency of every single item can be computed in one scan of \( DB \). However, it is too costly to compute, in one scan, the frequency of every possible combination of single items because of the huge number of such combinations. An efficient alternative proposed in the Apriori algorithm [1] is to count only those itemsets whose every proper subset is frequent.
That is, at the \( k \)-th scan of \( DB \), derive the frequent itemsets of length \( k \) (where \( k \geq 1 \)), and then derive the set of length-\((k+1)\) candidate itemsets (i.e., those whose every length-\(k\) subset is frequent) for the next scan. There are two difficulties in using an *Apriori*-like algorithm in a data stream environment. Frequent itemset mining by *Apriori* is essentially a set of join operations, as shown in [1]. However, join is a typical *blocking operator* [3], which cannot be performed over stream data since one can only observe at any moment a very limited size window of a data stream. To ensure the completeness of frequent patterns for stream data, it is necessary to store not only the information related to frequent items, but also that related to infrequent ones. If the information about the *currently infrequent items* were not stored, such information would be lost. If these items became frequent later, it would be impossible to figure out their correct overall support and their connections with other items. However, it is also unrealistic to hold all streaming data in the limited main memory. Thus, we divide patterns into three categories: *frequent patterns*, *subfrequent patterns*, and *infrequent patterns*.

**Definition 1** The frequency of an itemset \(I\) over a time period \(T\) is the number of transactions in \(T\) in which \(I\) occurs. The support of \(I\) is the frequency divided by the total number of transactions observed in \(T\). Let the min\_support be \(\sigma\) and the relaxation ratio be \(\rho = \epsilon/\sigma\), where \(\epsilon\) is the maximum support error. \(I\) is frequent if its support is no less than \(\sigma\); it is sub-frequent if its support is less than \(\sigma\) but no less than \(\epsilon\); otherwise, it is infrequent. We are only interested in frequent patterns.
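For illustration, the three categories of Definition 1 can be sketched in Python (a sketch only; the function name is ours, and we read the lower bound of the sub-frequent range as \(\epsilon\), consistent with the \(\epsilon\)-based insertion test used later in the paper):

```python
def classify(frequency: int, total_transactions: int, sigma: float, epsilon: float) -> str:
    """Classify an itemset per Definition 1: frequent, sub-frequent, or infrequent.

    sigma is min_support and epsilon is the maximum support error (epsilon < sigma).
    """
    support = frequency / total_transactions
    if support >= sigma:
        return "frequent"
    if support >= epsilon:       # kept: may become frequent later
        return "sub-frequent"
    return "infrequent"          # discarded to save space
```

For example, with \(\sigma = 0.05\) and \(\epsilon = 0.005\) over 1000 transactions, an itemset occurring 60 times is frequent, one occurring 10 times is sub-frequent, and one occurring 3 times is infrequent.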
But we have to maintain subfrequent patterns, since they may become frequent later. We want to discard infrequent patterns, since the number of infrequent patterns is very large and the loss of support from infrequent patterns will not noticeably affect the calculated supports. The definition of frequent, subfrequent, and infrequent patterns is actually relative to the period \(T\). For example, a pattern \(I\) may be subfrequent over a period \(T_1\), but it is possible that it becomes infrequent over a longer period \(T_2\) \((T_1 \subset T_2)\). In this case, we can conclude that \(I\) will not be frequent over period \(T_2\). In our design, the complete structure, *FP-stream*, consists of two parts: (1) a global frequent pattern-tree held in main memory, and (2) tilted-time windows embedded in this pattern-tree. Incremental updates can be performed on both parts of the *FP-stream*. Incremental updates occur when some infrequent patterns become (sub)frequent, or vice versa. At any moment, the set of frequent patterns over a period can be obtained from the *FP-stream* residing in main memory (with a support error bounded above by \(\epsilon\)).

### 3.3 Mining Time-Sensitive Frequent Patterns in Data Streams

The design of the *tilted-time window* [5] is based on the fact that people are often interested in recent changes at a fine granularity, but in long-term changes at a coarse granularity. Fig. 3.1 shows such a tilted-time window: the most recent 4 quarters of an hour, then the last 24 hours, and then 31 days. Based on this model, one can compute frequent itemsets in the last hour with the precision of a quarter of an hour, in the last day with the precision of an hour, etc. This model registers only $4 + 24 + 31 = 59$ units of time, with an acceptable trade-off of lower granularity at distant times.
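The precision trade-off of the natural tilted-time window can be illustrated with a small sketch (the function name and the quarter-hour encoding are ours, not part of the original design):

```python
# Natural tilted-time window: 4 quarter-hour slots, then 24 hour slots, then 31 day slots.
# Each level: (name, span of one slot in quarter-hours, number of slots).
LEVELS = [("quarter", 1, 4), ("hour", 4, 24), ("day", 96, 31)]

def granularity(quarters_ago: int) -> str:
    """Finest granularity at which a moment this many quarter-hours old is still recorded."""
    bound = 0
    for name, span, count in LEVELS:
        bound += span * count          # how far back this level reaches, cumulatively
        if quarters_ago < bound:
            return name
    return "beyond one month"

# Only 4 + 24 + 31 = 59 slots cover a month, versus 31 * 24 * 4 = 2976 flat quarter slots.
assert sum(count for _, _, count in LEVELS) == 59
```

A moment two quarter-hours old is still answered at quarter precision, one from yesterday only at hour precision, and one from three weeks ago only at day precision.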
![Figure 3.1: Natural Tilted-Time Window Frames](image) As shown in Figure 3.2, for each tilted-time window, a collection of patterns and their frequencies can be maintained. Assuming these collections contain the frequent patterns (and possibly more), the following queries can be answered: (1) what is the frequent pattern set over the period $t_2$ and $t_3$? (2) what are the periods when $(a, b)$ is frequent? (3) does the support of $(a)$ change dramatically in the period from $t_3$ to $t_0$? and so on. That is, one can (1) mine frequent patterns in the current window, (2) mine frequent patterns over time ranges with granularity confined by the specification of window size and boundary, (3) put different weights on different windows to mine various kinds of weighted frequent patterns, and (4) mine the evolution of frequent patterns based on the changes of their occurrences in a sequence of windows. Thus we have the flexibility to mine a variety of frequent patterns associated with time. ![Figure 3.2: Pattern Frequencies for Tilted-Time Windows](image) A compact tree representation of the pattern collections, called a pattern-tree, can be used. Figure 3.3 shows an example. Each node in the pattern-tree represents a pattern (the items on the path from the root to this node), and its frequency is recorded in the node. This tree shares a similar structure with an FP-tree. The difference is that it stores patterns instead of transactions. In fact, we can use the same FP-tree construction method in [10] to build this tree by taking the set of patterns as input. The patterns in adjacent time windows will likely be very similar. Therefore, the tree structures for different tilted-time windows will likely have considerable overlap. Embedding the tilted-time window structure into each node will likely save considerable space. Thus we propose to use only one pattern-tree, where at each node the frequency for each tilted-time window is maintained.
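A minimal sketch of such a pattern-tree with a window table embedded in each node follows (class and function names are ours, and a fixed sort order stands in for the global f_list ordering):

```python
class PatternTreeNode:
    """One node = one pattern (the item path from the root); shared prefixes are stored once."""
    def __init__(self):
        self.children = {}   # item -> PatternTreeNode
        self.windows = []    # embedded tilted-time frequency table for this pattern

def insert_pattern(root: PatternTreeNode, pattern, freq: int) -> None:
    """Record `freq` for `pattern` in the newest window slot of its node."""
    node = root
    for item in sorted(pattern):          # stand-in for the f_list item order
        node = node.children.setdefault(item, PatternTreeNode())
    node.windows.append(freq)

root = PatternTreeNode()
insert_pattern(root, ["a"], 5)
insert_pattern(root, ["a", "b"], 3)       # shares the 'a' node with the pattern above
insert_pattern(root, ["b", "a"], 2)       # same pattern {a, b}, recorded in the next window
```

Because adjacent windows tend to contain similar patterns, a single tree with a per-node window table stores each shared prefix once instead of duplicating it across windows.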
Figure 3.4 shows an example of a pattern tree with tilted-time windows embedded. We call this structure an FP-stream.

### 3.4 Maintaining Tilted-Time Windows

With the arrival of new data, the tilted-time window table will grow. In order to keep the table compact, tilted-time window maintenance mechanisms are developed based on a tilted-time window construction strategy.

3.4.1 Natural Tilted-Time Window

For the natural tilted-time window discussed before (shown in Figure 3.1), the maintenance of windows is straightforward. When four quarters are accumulated, they merge together to constitute one hour. After 24 hours are accumulated, one day is built. In the natural tilted-time window, at most 59 tilted windows need to be maintained for a period of one month. In the following section, we introduce a logarithmic tilted-time window schema which will reduce the number of tilted-time windows used.

3.4.2 Logarithmic Tilted-Time Window

As an alternative, the tilted-time window frame can also be constructed based on a logarithmic time scale, as shown in Figure 3.5. Suppose the current window holds the transactions in the current quarter. Then the remaining slots are for the last quarter, the next two quarters, 4 quarters, 8 quarters, 16 quarters, etc., growing at an exponential rate of 2. According to this model, one year of data will require \( \lceil \log_2(365 \times 24 \times 4) \rceil + 1 = 17 \) units of time instead of \( 365 \times 24 \times 4 = 35{,}040 \) units. As we can see, the logarithmic tilted-time window schema is very space-efficient. ![Figure 3.5: Tilted-Time Window Frame with Logarithmic Partition](image) Formally, we assume that the stream of transactions is broken up into fixed-sized batches \( B_1, B_2, \ldots, B_n, \ldots \), where \( B_n \) is the most current batch and \( B_1 \) the oldest. For \( i \geq j \), let \( B(i, j) \) denote \( \bigcup_{k=j}^{i} B_k \).
For a given itemset \( I \), let \( f_I(i, j) \) denote the frequency of \( I \) in \( B(i, j) \) (\( I \) is omitted when clear from context). A logarithmic tilted-time window is used to record frequencies for itemset \( I \). The following frequencies are kept: \[ f(n, n); f(n - 1, n - 1); f(n - 2, n - 3); f(n - 4, n - 7); \ldots \] The ratio \( r \) between the sizes of two neighboring tilted-time windows reflects the growth rate of the window size, which usually should be larger than 1. The above example illustrates a logarithmic tilted-time window with a ratio of 2. Note that there are \( \lceil \log_2(n) \rceil + 1 \) frequencies. So even for a very large number of batches, the maximum number of frequencies is reasonable (e.g., \( 10^9 \) batches require 31 frequencies). However, in a logarithmic tilted-time window, intermediate buffer windows need to be maintained. These intermediate windows will replace or be merged with tilted-time windows when they are full.

3.4.3 Logarithmic Tilted-Time Window Updating

Given a new batch of transactions \( B \), we describe how the logarithmic tilted-time window for \( I \) is updated. First, replace \( f(n, n) \), the frequency at the finest level of time granularity (level 0), with \( f(B) \) and shift \( f(n, n) \) back to the next finest level of time granularity (level 1). \( f(n, n) \) replaces \( f(n - 1, n - 1) \) at level 1. Before shifting \( f(n - 1, n - 1) \) back to level 2, check whether the intermediate window for level 1 is full. If not, \( f(n - 1, n - 1) \) is not shifted back; instead it is placed in the intermediate window and the algorithm stops (in the example in the previous subsection, the intermediate window at every level is empty). If the intermediate window is full (say with a frequency \( f \)), then \( f(n - 1, n - 1) + f \) is shifted back to level 2. This process continues until the shifting stops. Consider the following example over batches $B_1, \ldots, B_8$.
The tilted-time window initially looks like $$f(8, 8); f(7, 7); f(6, 5); f(4, 1).$$ $f(8, 8)$ resides in the window for granularity level 0, $f(7, 7)$ for level 1, $f(6, 5)$ for level 2, and $f(4, 1)$ for level 3. The intermediate windows at each level are empty and thus not shown. Upon arrival of $B_9$ we update the tilted-time window to $$f(9, 9); f(8, 8)[f(7, 7)]; f(6, 5); f(4, 1).$$ $f(9, 9)$ replaces $f(8, 8)$ at level 0, which is shifted back to level 1 replacing $f(7, 7)$. Since the intermediate window for level 1 is empty, $f(7, 7)$ is put into the window and the shifting stops ($[\ldots]$ denotes an intermediate window). Upon arrival of $B_{10}$, updating requires several steps. First, we replace $f(9, 9)$ by $f(10, 10)$ and shift $f(9, 9)$ back. The intermediate window at level 1 is full, so the frequencies at level 1 are merged (producing $f(8, 7) = f(8, 8) + f(7, 7)$). $f(8, 7)$ is shifted back to level 2, replacing $f(6, 5)$. Since the intermediate window at that level is empty, $f(6, 5)$ is put into the intermediate window and the shifting stops. The result is $$f(10, 10); f(9, 9); f(8, 7)[f(6, 5)]; f(4, 1).$$ Upon arrival of $B_{11}$ we update and get $$f(11, 11); f(10, 10)[f(9, 9)]; f(8, 7)[f(6, 5)]; f(4, 1).$$ Finally, upon arrival of $B_{12}$ we get $$f(12, 12); f(11, 11); f(10, 9); f(8, 5)[f(4, 1)].$$ Notice that only one entry is needed in intermediate storage at any granularity level. Hence, the size of the tilted-time window can grow no larger than $2\lceil\log_2(N)\rceil + 2$, where $N$ is the number of batches seen thus far in the stream. There are two basic operations in maintaining logarithmic tilted-time windows: one is frequency merging; the other is entry shifting. For $N$ batches, we would like to know how many such operations need to be done for each pattern. The following claim gives the amortized number of shifting and merging operations, which demonstrates the efficiency of the logarithmic-scale partition.
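The shifting-and-merging procedure illustrated above can be sketched for a single pattern as follows (a simplified sketch; the class and method names are ours, and each slot holds a (newest, oldest, frequency) triple representing $f(i, j)$):

```python
class LogTiltedWindow:
    """Logarithmic tilted-time window (ratio 2) for one pattern.

    main[l] is the tilted-time window at granularity level l; inter[l] is the
    single-entry intermediate buffer for that level.
    """
    def __init__(self):
        self.main, self.inter = [], []

    def _ensure(self, level):
        while len(self.main) <= level:
            self.main.append(None)
            self.inter.append(None)

    def add_batch(self, batch_no, freq):
        self._ensure(0)
        carry = self.main[0]                      # old f(n, n) is shifted back
        self.main[0] = (batch_no, batch_no, freq)
        level = 1
        while carry is not None:
            self._ensure(level)
            displaced = self.main[level]
            self.main[level] = carry
            if displaced is None:                 # empty slot: shifting stops
                break
            if self.inter[level] is None:         # buffer the displaced entry and stop
                self.inter[level] = displaced
                break
            buffered = self.inter[level]          # buffer full: merge, then shift back
            self.inter[level] = None
            carry = (displaced[0], buffered[1], displaced[2] + buffered[2])
            level += 1

    def ranges(self):
        return [(s[0], s[1]) for s in self.main if s]
```

Feeding batches $B_1$ through $B_8$ with unit frequencies reproduces the initial state $f(8,8); f(7,7); f(6,5); f(4,1)$ from the example, and continuing through $B_{12}$ yields $f(12,12); f(11,11); f(10,9); f(8,5)$ with $f(4,1)$ buffered at the coarsest level.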
For any pattern, the amortized number of shifting and merging operations is the total number of such operations performed over $N$ batches divided by $N$. **Claim 3.4.1** In the logarithmic tilted-time window updating, the amortized number of shifting and merging operations for each pattern is $O(1)$.

### 3.5 Minimum Support

Let $t_0, \ldots, t_n$ be the tilted-time windows which group the batches seen thus far in the stream, where $t_n$ is the oldest (note that this notation differs from that of the $B$'s in the previous section). We denote the window size of \( t_i \) (the number of transactions in \( t_i \)) by \( w_i \). Our goal is to mine all frequent itemsets whose supports are larger than \( \sigma \) over any period \( T = t_k \cup t_{k+1} \cup \ldots \cup t_{k'} \) (\( 0 \leq k \leq k' \leq n \)). The size of \( T \) is \( W = w_k + w_{k+1} + \ldots + w_{k'} \). If we maintained all possible itemsets in all periods, no matter whether they were frequent or not, this goal could be met. However, this would require too much space, so we only maintain \( f_I(t_0), \ldots, f_I(t_{m-1}) \) for some \( m \) (\( 0 \leq m \leq n \)) and drop the remaining tail sequence of tilted-time windows. Specifically, we drop the tail sequence \( f_I(t_m), \ldots, f_I(t_n) \) when the following condition holds: \[ \forall i, m \leq i \leq n, \quad f_I(t_i) < \sigma w_i \quad \text{and} \quad \sum_{j=m}^{i} f_I(t_j) < \epsilon \sum_{j=m}^{i} w_j. \tag{3.1} \] As a result, we no longer have an exact frequency over \( T \), but rather an approximate frequency \( \hat{f}_I(T) = \sum_{i=k}^{\min(m-1, k')} f_I(t_i) \) if \( m > k \), and \( \hat{f}_I(T) = 0 \) if \( m \leq k \). The approximate frequency underestimates the actual frequency by at most \( \epsilon W \): \[ f_I(T) - \epsilon W \leq \hat{f}_I(T) \leq f_I(T). \tag{3.2} \] Thus if we deliver all itemsets whose approximate frequency is larger than \((\sigma - \epsilon)W\), we will not miss any frequent itemsets in period \( T \) ([15] discussed the landmark case).
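The dropping rule can be sketched as follows (a sketch under our reading of condition (3.1), with the cumulative sums running over the candidate tail \( t_m, \ldots, t_i \); the function name is ours):

```python
def tail_prune(freqs, sizes, sigma, epsilon):
    """Return the retained prefix f_I(t_0 .. t_{m-1}) for the smallest valid m.

    freqs[i] = f_I(t_i) and sizes[i] = w_i, with t_n the oldest window. The tail
    t_m .. t_n may be dropped when every window in it is individually infrequent
    (f < sigma * w) and the tail's cumulative support stays below epsilon at
    every point.
    """
    n = len(freqs) - 1
    for m in range(n + 2):                  # m = n + 1 (empty tail) always qualifies
        cum_f = cum_w = 0.0
        ok = True
        for i in range(m, n + 1):
            cum_f += freqs[i]
            cum_w += sizes[i]
            if not (freqs[i] < sigma * sizes[i] and cum_f < epsilon * cum_w):
                ok = False
                break
        if ok:
            return freqs[:m]
```

For example, with sigma = 0.1 and epsilon = 0.05 over windows of 100 transactions each, the table [10, 1, 1] prunes to [10], while [10, 9] cannot be pruned at all; in either case, any itemset that is frequent over a period $T$ is still reported via the $(\sigma - \epsilon)W$ threshold.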
However, we may return some itemsets whose frequency is between \((\sigma - \epsilon)W\) and \(\sigma W\). This is reasonable when \( \epsilon \) is small. Based on inequality (3.2), we draw the following claim that the pruning of the tail of a tilted-time window table does not compromise our goal. **Claim 3.5.1** Consider itemset \( I \). Let \( m \) be the minimum number satisfying condition (3.1). We drop the tail frequencies from \( f_I(t_m) \) to \( f_I(t_n) \). For any period \( T = t_k \cup \ldots \cup t_{k'} \) (\( 0 \leq k \leq k' \leq n \)), if \( f_I(T) \geq \sigma W \), then \( \hat{f}_I(T) \geq (\sigma - \epsilon)W \). The basic idea of Claim 3.5.1 is that if we prune \( I \)'s tilted-time window table to \( t_0, \ldots, t_{m-1} \), then we can still find all frequent itemsets (with support error \( \epsilon \)) over any user-defined time period \( T \). We call this pruning tail pruning. Itemsets and their tilted-time window tables are maintained in the **FP-stream** data structure. When a new batch \( B \) arrives, mine the itemsets from \( B \) and update the **FP-stream** structure. For each \( I \) mined in \( B \), if \( I \) does not appear in the structure, add \( I \) if \( f_I(B) \geq \epsilon |B| \). If \( I \) does appear, add \( f_I(B) \) to \( I \)'s table and then do tail pruning. If all of the windows are dropped, then drop \( I \) from the **FP-stream**. This algorithm will correctly maintain the **FP-stream** structure, but not very efficiently. We have the following anti-monotone property for the frequencies recorded in tilted-time window tables.\(^1\) **Claim 3.5.2** Consider itemsets \( I \subseteq I' \) which are both in the **FP-stream** structure at the end of a batch. Let \( f_I(t_0), f_I(t_1), \ldots, f_I(t_k) \) and \( f_{I'}(t_0), f_{I'}(t_1), \ldots, f_{I'}(t_l) \) be the entries maintained in the tilted-time window tables for $I$ and $I'$, respectively. The following statements hold. 1. $k \geq l$. 2. $\forall i, 0 \leq i \leq l, f_I(t_i) \geq f_{I'}(t_i)$.

---

\(^1\)Maintaining only frequent tilted-time window entries will not work. As the stream progresses, infrequent entries may be needed to account for itemsets going from infrequent to frequent.

Claim 3.5.2 shows that the property that the frequency of an itemset must be no smaller than that of its supersets still holds under the framework of approximate frequency counting and the tilted-time window scenario. Furthermore, the size of the tilted-time window table of $I$ should be equal to or larger than that of its supersets. This claim allows for some pruning in the following way. If $I$ is found in $B$ but is not in the FP-stream structure, then by Claim 3.5.2 part 1, no superset is in the structure. Hence, if $f_I(B) < \epsilon |B|$, then none of the supersets need be examined. So the mining of $B$ can prune its search and not visit supersets of $I$. We call this type of pruning Type I Pruning. By Claims 3.5.1 and 3.5.2, we conclude the following anti-monotone property which can help in efficiently cutting off infrequent patterns. **Claim 3.5.3** Consider patterns $I \subseteq I'$; the following statements hold. 1. If the tail frequencies $f_I(t_m) \ldots f_I(t_n)$ can be safely dropped based on Claim 3.5.1, then $I'$ can safely drop any frequencies among $f_{I'}(t_m) \ldots f_{I'}(t_n)$ that it has. 2. If all the frequencies $f_I(t_0) \ldots f_I(t_n)$ can be safely dropped based on Claim 3.5.1, then $I'$ together with all its frequencies can be safely dropped. Claim 3.5.3 part 2 essentially says that if all of $I$'s tilted-time window table entries are pruned (hence $I$ is dropped), then any superset will also be dropped. We call this type of pruning Type II Pruning.

### 3.6 Algorithm

In this section, we describe in more detail the algorithm for constructing and maintaining the FP-stream structure.
In particular, we incorporate the pruning techniques into the high-level description of the algorithm given in the previous section. The FP-stream structure is updated in bulk, only when enough incoming transactions have arrived to form a new batch $B_i$. The algorithm treats the first batch differently from the rest, as an initialization step. As the transactions for $B_1$ arrive, the frequencies for all items are computed, and the transactions are stored in main memory. An ordering, $f_{\text{list}}$, is created in which items are ordered by decreasing frequency (just as done in [10]). This ordering remains fixed for all remaining batches. Once all the transactions for $B_1$ have arrived (and been stored in memory), the batch in memory is scanned, creating an FP-tree and pruning all items with frequency less than $\epsilon |B_1|$. Finally, an FP-stream structure is created by mining all $\epsilon$-frequent itemsets from the FP-tree (the batch in memory and the transaction FP-tree are then discarded). All the remaining batches $B_i$, for $i \geq 2$, are processed according to the algorithm below.

**Algorithm 1 (FP-streaming)** *(Incremental update of the FP-stream structure with incoming stream data)*

**INPUT:** (1) An FP-stream structure, (2) a minimum support threshold, $\sigma$, (3) an error rate, $\epsilon$, (4) an incoming batch, $B_i$, of transactions (these actually arrive one at a time from the stream), and (5) an item ordering $f_{\text{list}}$.

**OUTPUT:** The updated FP-stream structure.

**METHOD:**

1. Initialize the FP-tree to empty.
2. Sort each incoming transaction $t$ according to $f_{\text{list}}$, and then insert it into the FP-tree without pruning any items.
3. When all the transactions in $B_i$ have been accumulated, update the FP-stream as follows. (a) Mine itemsets out of the FP-tree using the FP-growth algorithm in [10], modified as below. For each mined itemset, $I$, check if $I$ is in the FP-stream structure. If $I$ is in the structure, do the following. i.
Add $f_I(B)$ to the tilted-time window table for $I$ as described in Section 3.4.3. ii. Conduct tail pruning. iii. If the table is empty, then FP-growth stops mining supersets of $I$ (Type II Pruning). Note that the removal of $I$ from the FP-stream structure is deferred until the scanning of the structure (next step). iv. If the table is not empty, then FP-growth continues mining supersets of $I$. If $I$ is not in the structure and $f_I(B) \geq \epsilon|B|$, then insert $I$ into the structure (its tilted-time window table will have only one entry, $f_I(B_i)$). Otherwise, FP-growth stops mining supersets of $I$ (Type I Pruning). (b) Scan the FP-stream structure (depth-first search). For each itemset $I$ encountered, check if $I$ was updated when $B$ was mined. If not, then insert 0 into $I$'s tilted-time window table ($I$ did not occur in $B$).\(^2\) Prune $I$'s table by tail pruning. Once the search reaches a leaf, if the leaf has an empty tilted-time window table, then drop the leaf. If there are any siblings of the leaf, continue the search with them. If there were no siblings, then return to the parent and continue the search with its siblings. Note that if all of the children of the parent were dropped, then the parent becomes a leaf node and might be dropped. \[\blacksquare\]

---

\(^2\)By recording some additional time-stamp information, these zero tilted-time window entries could be dropped. However, in the interests of simplicity, we did not do so and leave it for future work.

## 3.7 Performance Study and Experiments

In this section, we report our performance study. We describe first our experimental set-up and then our results.

3.7.1 Experimental Set-Up

Our algorithm was written in C and compiled using gcc with the -lm switch. All of our experiments were performed on a SUN Ultra-5 workstation using a 333 MHz Sun UltraSPARC-IIi processor, 512 MB of RAM, and 1350 MB of virtual memory. The operating system in use was SunOS 5.8.
All experiments were run without any other users on the machine. The stream data was generated by the IBM synthetic market-basket data generator, available at "www.almaden.ibm.com/cs/quest/syndata.html/#assocSynData" (managed by the Quest data mining group). In all the experiments, 3M transactions were generated using 1K distinct items. The average number of items per transaction was varied as described below. The default values for all other parameters of the synthetic data generator were used (i.e., number of patterns 10000, average length of the maximal pattern 4, correlation coefficient between patterns 0.25, and average confidence in a rule 0.75). The stream was broken into batches of 50K transactions each and fed into our program through standard input. The support threshold \( \sigma \) was varied (as described below) and \( \epsilon \) was set to 0.1\( \sigma \). Note that the underlying statistical model used to generate the transactions does not change as the stream progresses. We feel that this does not reflect reality well. In reality, seasonal variations may cause the underlying model (or its parameters) to shift in time. A simple-minded way to capture some of this shifting effect is to periodically, randomly permute some item names. To do this, we use an item mapping table, \( M \). The table initially maps all item names to themselves (i.e., \( M(i) = i \)). However, every five batches, 200 random permutations are applied to the table.\(^4\)

3.7.2 Experimental Results

We performed two sets of experiments. In the first set of experiments, \( \sigma \) was fixed at 0.005 (0.5 percent) and \( \epsilon \) at 0.0005. In the second set of experiments, \( \sigma \) was fixed at 0.0075 and \( \epsilon \) at 0.00075. In both sets of experiments three separate data sets were fed into the program. The first had an average transaction length of 3, the second 5, and the third 7.
At each batch the following statistics were collected: the total number of seconds required per batch (TIME),\(^5\) the size of the FP-stream structure at the end of each batch in bytes (SIZE),\(^6\) the total number of itemsets held in the FP-stream structure at the end of the batch (NUM ITEMSETS), and the average length of an itemset in the FP-stream at the end of each batch (AVE LEN). In all graphs presented the x axis represents the batch number. Moreover, "Support" is used to denote \( \sigma \). Figures 3.6 and 3.7 show the TIME and SIZE results, respectively. In each figure the top graph shows the results for average transaction length 3, the middle one shows average transaction length 5, and the bottom one shows average transaction length 7.

---

\(^3\)Not all 3M transactions are processed. In some cases only 41 batches are processed (2.05M transactions), in other cases 55 batches (2.75M transactions).

\(^4\)A random permutation of table entries \( i \) and \( j \) means that \( M(i) \) is swapped with \( M(j) \). When each transaction \( \{i_1, \ldots, i_k\} \) is read from input, before it is processed, it is transformed to \( \{M(i_1), \ldots, M(i_k)\} \).

\(^5\)Includes the time to read transactions from standard input.

\(^6\)Does not include the temporary FP-tree structure used for mining the batch.

Figure 3.6: **FP-stream** time requirements

As expected, the item permutation causes the behavior of the algorithm to jump at every five batches. But stability is regained quickly. In general, the time and space requirements of the algorithm tend to stabilize or grow very slowly as the stream progresses (despite the random permutations). For example, the time required with average transaction length 5 and support 0.0075 (middle graph, Figure 3.6) seems to stabilize at 50 seconds, with very small bumps at every 5 batches. The space required (middle graph, Figure 3.7) seems to stabilize at roughly 350K, with small bumps.
The stability results are quite nice as they provide evidence that the algorithm can handle long data streams. The overall space requirements are very modest in all cases (less than 3M) and can easily fit into main memory. To analyze the time requirements, first recall that the algorithm is to be used in a batch environment. So, we assume that while the transactions are accumulating for a batch, updates to the **FP-stream** structure from the previous batch can be under way. The primary requirement, in our opinion, is that the algorithm not fall behind the stream. In other words, as long as the **FP-stream** structure can be updated before the next batch of transactions is processed, the primary requirement is met. Consider the case of average transaction length three and \( \sigma = 0.0075 \) (top graph in figure 3.6). The time stabilizes to roughly 25 seconds per batch. Hence, the algorithm can handle a stream with an arrival rate of 2000 transactions per second (batch size divided by time). This represents the best case of our experiments. In the worst case (average transaction length 7 and \( \sigma = 0.0075 \)) the rate is roughly 180 transactions per second. While this rate is not as large as we would like, we feel that considerable improvement can be obtained since the implementation is currently simple and straightforward with no optimizations. In some circumstances it is acceptable to only mine small itemsets. If the assumption is made that only small itemsets are needed, then the algorithm can prune away a great deal of work. Figure 3.8 shows the time performance of the algorithm when the length of the itemsets mined is bounded by two. We see that the times for average transaction length 3 (figure 3.8 top graph) are not much smaller than those where all itemsets were mined (figure 3.6 top graph). But the difference is significant for average transaction length 7.
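The arrival-rate arithmetic above is simply batch size divided by per-batch processing time; a trivial sketch (the function name is ours):

```python
# Sustainable arrival rate for a batched stream: the system keeps up as
# long as transactions arrive no faster than one batch can be processed.
def max_arrival_rate(batch_size, seconds_per_batch):
    return batch_size / seconds_per_batch

# E.g. 50K-transaction batches processed in 25 seconds each give 2000 tps.
```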
Here the algorithm with itemsets of length bounded by two at support 0.005 can handle a stream with an arrival rate of 556 transactions per second (the algorithm with unbounded itemset lengths could handle a rate of 180). An interesting observation can be made concerning the “spikes” and “troughs” in figures 3.6 and 3.7. Considering SIZE, we see that the random permutations cause a narrow trough (drop) in space usage. We conjecture that the permutations cause some itemsets in the tree to be dropped due to a sharp decrease in their frequency. Considering TIME, we see that the permutations cause a narrow spike (increase) in the top graph at both support thresholds. In the middle graph the spiking behavior persists for threshold 0.0075 but switches to troughs for threshold 0.005. Finally, in the bottom graph, troughs can be seen for both thresholds.

Figure 3.8: FP-stream time requirements—itemset lengths mined are bounded by two

The switching from spikes to troughs is an interesting phenomenon. As yet we do not know its cause, but we put forth a conjecture. When an item permutation occurs, many itemsets that appear in the FP-stream structure no longer appear in the new batch and many itemsets that do not appear in the structure appear in the new batch. This results in two competing factors: (1) mining the batch requires less work because itemsets in the structure that do not appear in the batch need not be updated; and (2) mining the batch requires more work because itemsets not in the structure that were sub-frequent in the current batch need to be added. When the average transaction length is small (say 3), condition (2) dominates—resulting in a spike. When it is large (say 7), condition (1) dominates—resulting in a trough. Finally, we describe some results concerning the nature of the itemsets in the FP-stream structure.
Figures 3.9 and 3.10 show the average itemset length and the total number of itemsets, respectively.\footnote{The maximum itemset length was between 8 and 11 in all experiments.} Note that while the average itemset length does not seem to increase with average transaction length, the number of itemsets does. This is consistent with our running the Apriori program of C. Borgelt\footnote{fuzzy.cs.uni-magdeburg.de/~borgelt/software.html/#assoc} on two datasets consisting of 50K transactions, 1K items, and average transaction lengths 5 and 7, respectively. The support threshold in each case was 0.0005 (corresponding to \( \epsilon \) in our \( \sigma = 0.005 \) experiments). The itemsets produced by Apriori should be exactly the same as those in the FP-stream after the first batch (the leftmost point in the middle and bottom graphs in figure 3.10). We observed that the make-up of the itemset lengths from Apriori was nearly the same for both datasets: \( \approx 3\% \) size one, \( \approx 33\% \) size two, \( \approx 23\% \) size three, \( \approx 18\% \) size four, \( \approx 12\% \) size five, \( \approx 7\% \) size six, \( \approx 3\% \) size seven, and \( \approx 1\% \) sizes eight, nine, and ten combined.

### 3.8 Time Fading Framework

In the previous discussion, we introduced natural and logarithmic tilted-time window partitions. Both give finer granularity to recent data and coarser granularity to the past. However, they do not discount the support of past transactions. In order to discount past transactions, we introduce a fading factor \( \phi \). Suppose we have fixed-size batches \( B_1, B_2, \ldots, B_n \), where \( B_n \) is the most current batch and \( B_1 \) the oldest. For \( i \geq j \), let \( B(i, j) \) denote \( \bigcup_{k=j}^{i} B_k \). For \( B(i, j) \), the actual window size is \( \sum_{k=j}^{i} |B_k| \).
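The discounting induced by the fading factor \( \phi \) can be sketched numerically (a toy illustration with our own function names; with \( \phi = 1 \) it reduces to the undiscounted quantities):

```python
# Toy sketch of fading-factor discounting: the contribution of batch B_k
# to the window B(i, j) is weighted by phi**(i - k), both when counting
# window size and when accumulating an itemset's support.
def faded_window_size(batch_sizes, phi):
    """batch_sizes = [|B_j|, ..., |B_i|], oldest first."""
    n = len(batch_sizes)
    return sum(phi ** (n - 1 - k) * b for k, b in enumerate(batch_sizes))

def faded_support(batch_supports, phi):
    """batch_supports = [f_I(B_j), ..., f_I(B_i)], oldest first."""
    n = len(batch_supports)
    return sum(phi ** (n - 1 - k) * f for k, f in enumerate(batch_supports))
```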
In the fading framework, the faded window size for \( B(i, j) \) is \( \sum_{k=j}^{i} \phi^{i-k} |B_k| \) and its faded support is \( \sum_{k=j}^{i} \phi^{i-k} f_I(B_k) \). We do not change Algorithm 1; that is, we still drop infrequent patterns whose support is less than \( \epsilon \).

Figure 3.10: **FP-stream** total number of itemsets

Assume the real faded support of \( I \) for \( B(i,j) \) is \( f_I = \sum_{k=j}^{i} \phi^{i-k} f_I(B_k) \) and the approximate support we get for \( I \) is \( \hat{f}_I \); then we have

$$f_I - \epsilon \sum_{k=j}^{i} \phi^{i-k} |B_k| \leq \hat{f}_I \leq f_I \quad (3.3)$$

Inequality (3.3) is consistent with inequality (3.2) if the actual support is replaced with the faded support and the actual window size is replaced with the faded window size. When we merge two tilted-time windows, \( t_i \) and \( t_{i+1} \), the merged frequency is \( f_I(t_i) + \hat{f}_I(t_{i+1}) \times \phi^{l_i} \), where \( l_i \) is the number of batches contained in tilted-time window \( t_i \). As we can see, our tilted-time window framework also works for the time fading model by changing the definition of the merging operation. The claims discussed before also hold for the time fading model.

### 3.9 Broader Stream Mining Issues

In the last few years a great deal of work has been conducted on the managing and mining of stream data (see [3] for a good survey). One of the broader issues addressed is the development of systems for processing queries on data streams. For example, the data stream management system (DSMS) at Stanford aims to serve a role analogous to that of a relational DBMS on data streams. Also, the issue of stream data mining has been addressed by extending static data mining models to a stream environment: classification [7, 11], clustering [9, 16], and frequent itemset discovery [15]. Dong et al. [8] argue that “online mining of the changes in data streams is one of the core issues” in stream data mining and that the previously mentioned studies have not addressed this issue substantially. Dong et al.
describe three categories of research problems: modeling and representation of changes, mining methods, and interactive exploration of changes. Modeling and representation of changes refers to the development of query languages for specifying mining queries on changes in data streams and the development of methods of summarizing and representing the discovered changes. Mining methods refers to the development of efficient algorithms for evaluating specific change mining queries as well as general queries specified by a change mining query language. Finally, interactive exploration of changes refers to the development of methods to support a user’s evaluation of changes. For example, a user may initially want to monitor changes at a high level, then more closely inspect the details of interesting high-level changes. We envision the FP-stream model as a foundation upon which frequent itemset change mining queries can be answered. For example, the change in frequency of itemsets across multiple time granularities can be computed. Acknowledgments The authors express their thanks to An-Hai Doan for his constructive comments on a draft of the paper. The work was supported in part by U.S. National Science Foundation (NSF) IIS-02-09199, the Univ. of Illinois, and an IBM faculty award. C. Giannella thanks the NSF for their support through grant IIS-0082407. Bibliography
Task Variant Allocation in Distributed Robotics

José Cano*, David R. White†, Alejandro Bordallo*, Ciaran McCreesh†, Patrick Prosser†, Jeremy Singer† and Vijay Nagarajan*

*School of Informatics, University of Edinburgh, UK
†School of Computing Science, University of Glasgow, UK

Abstract—We consider the problem of assigning software processes (or tasks) to hardware processors in distributed robotics environments. We introduce the notion of a task variant, which supports the adaptation of software to specific hardware configurations. Task variants facilitate the trade-off of functional quality versus the requisite capacity and type of target execution processors. We formalise the problem of assigning task variants to processors as a mathematical model that incorporates typical constraints found in robotics applications; the model is a constrained form of a multi-objective, multi-dimensional, multiple-choice knapsack problem. We propose and evaluate three different solution methods to the problem: constraint programming, a constructive greedy heuristic and a local search metaheuristic. Furthermore, we demonstrate the use of task variants in a real instance of a distributed interactive multi-agent navigation system, showing that our best solution method (constraint programming) improves the system’s quality of service, as compared to the local search metaheuristic, the greedy heuristic and a randomised solution, by an average of 16%, 41% and 56% respectively.

I. INTRODUCTION

Modern robotics systems are increasingly distributed, heterogeneous and collaborative, incorporating multiple independent agents that communicate via message passing and distributed protocols. A distributed approach can offer desirable qualities such as improved performance. Heterogeneity refers to the type and amount of hardware resources (e.g. sensors, CPU capacity) available on each agent in the system.
In such systems, the efficient allocation of software processes (referred to as tasks) to hardware processors is of paramount importance in ensuring optimality. Previous works [13, 14] generally take an approach that considers only a fixed set of tasks, equivalent to a “one size fits all” architecture, limiting the ability of a system to adapt to different hardware configurations, and reducing the opportunities for optimisation. Instead, we advocate the development of systems based on the selection and allocation of what we term “task variants”. Task variants are interchangeable software components that offer configurable levels of quality of service (QoS) with a corresponding difference in the amount and/or type of computing resources they demand; such variants naturally arise in many scenarios, and often deployed systems consist of a particular subset of variants that have been implicitly chosen by a system architect. For example, consider alternative feature detection algorithms to solve a common task in a robotics vision pipeline: different algorithms provide increasingly sophisticated recognition methods but at the cost of increasing CPU load. Similarly, a variant may offer accelerated processing by targeting specialised hardware (e.g. GPUs). Currently, the crucial step of selecting and allocating such task variants is typically performed using ad-hoc methods, which provide no guarantee of optimality and may thus lead to inefficient allocation. In this paper, we take a more systematic approach. We formalise the task variant allocation problem and propose three different solution methods that are able to efficiently exploit available resources with the objective of maximising QoS while ensuring system correctness. We focus on distributed heterogeneous robotics systems where variants are naturally available for several tasks. In particular, our work has been driven by a case study, in the form of a distributed system of agents running on ROS [24]. 
The application implements a framework for inferring and planning with respect to the movement of goal-oriented agents in an interactive multi-agent setup — full details can be found in [4]. There are two types of agents navigating in the same physical space: autonomous robots represented by KUKA youBots [3] and humans. Each agent is pursuing a goal (a specific spatial position in the scenario) while avoiding collisions with other agents, based on online sensor processing and beliefs concerning the latent goals of other agents. Specific tasks are used to accomplish this objective in a distributed fashion. For example, robots infer navigation goals of other agents from network camera feeds, provided by at least one Tracker task — meanwhile humans act independently and are assumed to navigate as rational goal-oriented agents through the space. Some tasks can be configured via parameter values (e.g. the camera frame rate for the Tracker task) that translate into variants for that task. Each of these variants produces a different level of QoS, which we assume is quantified by an expert system user. Thus, the objective is to select task variants and allocate them to processors so as to maximise the overall QoS while agents reach their goals. The contributions of the paper are as follows: i) we introduce a mathematical model that represents the task variant selection and allocation problem; ii) we propose three different solution methods (constraint programming, local search metaheuristic, greedy heuristic) to the problem; iii) we evaluate and compare the solution methods through simulation; iv) we validate the solution methods in a real-world interactive multi-agent navigation system, showing how our best solution method (constraint programming) clearly outperforms the average QoS of the local search metaheuristic by 16%, the greedy heuristic by 41%, and a random allocation by 56%.
To the best of our knowledge, we are the first to address task allocation in the presence of variants in distributed robotics. II. Problem Formulation We now model the problem of task variant allocation in distributed robotics, in a general formulation that also applies to the specifics of our case study. We consider allocation as a constrained type of multi-objective, multi-dimensional, multiple-choice knapsack problem. Whilst instances of these three problems are individually common in the literature [11, 13], the combination is not. In addition, we allow for a number of unusual constraints describing task variants that distinguish this formulation from previous work (e.g. the specific type of hardware required to run a variant). Our formulation of the problem divides cleanly into three parts: the software architecture of the system, including information about task variants; the hardware configuration that is being targeted as a deployment platform; and the constraints and goals of task selection and allocation, which may be augmented by a system architect. A. Software Model A software architecture is defined by a directed graph of tasks, \((T, M)\) where the set of tasks \(T = \{\tau_1, \ldots, \tau_n\}\) and each task \(\tau_i\) is a unit of abstract functionality that must be performed by the system. Tasks communicate through message-passing: edges \(m_{i,j} = (\tau_i, \tau_j) \in M \subseteq T \times T\) are weighted by the ‘size’ of the corresponding message type, defined by a function \(S : m_{i,j} \to \mathbb{N}\); this is an abstract measure of the bandwidth required between two tasks to communicate effectively. Tasks are fulfilled by one or more task variants. Each task must have at least one variant. Different variants of the same task reflect different trade-offs between resource requirements and the QoS provided. Thus a task \(\tau_i\) is denoted as a set of indexed variants: \(\tau_i = \{v^i_1, \ldots, v^i_k\}\). 
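As a concrete (purely illustrative) rendering of this software model, tasks can be represented as lists of variants, each carrying its utilisation \(U\) and quality \(Q\), with edges weighted by message size \(S\); all names and values below are our own examples, not the paper's measured data:

```python
# Illustrative data model: tasks are sets of indexed variants; each
# variant has a utilisation U (normalised to a 'standard' processor)
# and a QoS value Q; messages carry the edge-weight function S.
from dataclasses import dataclass

@dataclass(frozen=True)
class Variant:
    task: str   # the task tau_i this variant fulfils
    index: int  # j in v^i_j
    util: int   # U(v^i_j)
    qos: int    # Q(v^i_j)

tasks = {
    "Tracker": [Variant("Tracker", 1, 20, 1), Variant("Tracker", 2, 45, 3)],
    "AMCL":    [Variant("AMCL", 1, 10, 1)],
}
# S: message size on the edge (tau_i, tau_j)
messages = {("Tracker", "AMCL"): 8}
V = [v for vs in tasks.values() for v in vs]  # all variants across tasks
```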
For convenience, we define \(V = \bigcup_i \tau_i\), such that \(V\) is the set of all variants across all tasks. For simplicity, we make the conservative assumption that the maximum message size for a task \(\tau_i\) is the same across all variants \(v^i_j\) of that task, and we use this maximum value when calculating bandwidth usage for any task variant. A given task variant \(v^i_j\) is characterised by its processor utilisation and the QoS it provides, represented by the functions \(U, Q : v^i_j \to \mathbb{N}\). The utilisation of all task variants is expressed normalised to a ‘standard’ processor; the capacity of all processors is similarly expressed. QoS values can be manually (Section V-A) or automatically generated (future work), although this is orthogonal to the problem addressed.

B. Hardware Model

The deployment hardware for a specific system is modelled as an undirected graph of processors, \((P, L)\), where the set of processors \(P = \{p_1, \ldots, p_n\}\) and each processor \(p_k\) has a given processing capacity defined by a function \(D : p_k \to \mathbb{N}\). A bidirectional network link between two processors \(p_k\) and \(p_m\) is defined as \(l_{k,m} = (p_k, p_m) \in L \subseteq P \times P\), so that each link between processors will support one or more message-passing edges between tasks. The capacity of a link is given by its maximum bandwidth and is defined by a function \(B : l_{k,m} \to \mathbb{N}\). Where, in a particular system instance, multiple processors share a single network link, we rely on the system architect responsible for specifying the problem to partition network resources between processors, for example by simply dividing the bandwidth equally between processor pairs.

C. Selection and Allocation Problem

The problem hence is to find a partial function \(A : V \to P\), that is, an assignment of task variants to processors that satisfies the system constraints (i.e.
a feasible solution), whilst maximising the QoS across all tasks, and also maximising efficiency (i.e. minimising the average processor utilisation) across all processors. As \(A\) is a partial function, we must check for domain membership of each task variant, represented as \(dom(A)\), to determine which variants are allocated. We assume that if a processor is not overloaded then each task running on the processor is able to complete its function in a timely manner; hence we defer the detailed scheduling policy to the designer of a particular system. An optimal allocation of task variants, \(A^*\), must maximise the arithmetic mean of QoS across all tasks (the global QoS):

\[ \max \; \frac{1}{n_{\text{tasks}}} \sum_{v^i_j \in dom(A)} Q(v^i_j) \quad (1) \]

whilst minimising the average utilisation across all processors as a secondary goal:

\[ \min \; \frac{1}{n_{\text{proc}}} \sum_{p_k \in P} \; \sum_{\substack{v^i_j \in dom(A) \\ A(v^i_j) = p_k}} U(v^i_j) \quad (2) \]

Exactly one variant of each task must be allocated:

\[ \forall \tau_i \in T, \; \forall v^i_j, v^i_k \in \tau_i : (v^i_j \in dom(A) \land v^i_k \in dom(A)) \implies j = k \quad (3) \]

The capacity of any processor must not be exceeded:

\[ \forall p_k \in P : \sum_{\substack{v^i_j \in dom(A) \\ A(v^i_j) = p_k}} U(v^i_j) \leq D(p_k) \quad (4) \]

The bandwidth of any network link must not be exceeded:

\[ \forall l_{q,r} \in L : \sum_{\substack{m_{i,k} \in M \\ A(v^i_j) = p_q \, \land \, A(v^k_l) = p_r}} S(m_{i,k}) \leq B(l_{q,r}) \quad (5) \]

In addition, residence constraints restrict the particular processors to which a given task variant \(v^i_j\) may be allocated, to a subset \(R^i_j \subseteq P\).
This is desirable, for example, when requisite sensors are located on a given robot, or because specialised hardware such as a GPU is used by the variant:

\[ v^i_j \in dom(A) \implies A(v^i_j) \in R^i_j \quad (6) \]

Coresidence constraints limit any assignment such that the selected variants for two given tasks must always reside on the same processor. In practice, this may be because the latency of a network connection is not tolerable. The set of coresidence constraints is a set of pairs \((\tau_i, \tau_k)\) for which:

\[ \forall v^i_j \in \tau_i, \; \forall v^k_l \in \tau_k : (v^i_j \in dom(A) \land v^k_l \in dom(A)) \implies A(v^i_j) = A(v^k_l) \quad (7) \]

III. Solution Methods

We now propose and describe our three different centralised approaches to solving the problem of task variant allocation: constraint programming (CP), a greedy heuristic (GH), and a local search metaheuristic (LS). These are three broadly representative search techniques from diverse families of solution methods, as outlined by Gulwani [9].

A. Constraint Programming

We expressed the problem in MiniZinc 2.0.11 [21], a declarative optimisation modelling language for constraint programming. A MiniZinc model is described in terms of variables, constraints, and an objective. Our model has a variable for each variant, stating the processor it is to be assigned to; since we are constructing a partial mapping, we add a special processor to signify an unassigned variant. Matrices are used to represent the bandwidth of the network and the sizes of messages exchanged between tasks. The model along with the source code can be found online [26]. Most constraints are a direct translation of those in Section II-C, although the constraint given by Equation 3 is expressed by saying that the sum of the variants allocated to any given task is one — this natural mapping is why we selected MiniZinc, rather than (for example) encoding to mixed integer programming.
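A hedged sketch of checking a candidate allocation against constraints (3), (4), (6) and (7) follows; the data layout and names are ours, and the bandwidth constraint (5) is omitted for brevity:

```python
# Feasibility check for an allocation A (a partial map variant -> processor).
# tasks: {task: [variant_id, ...]}; util: {variant_id: cpu};
# capacity: {processor: cpu}; residence: {variant_id: allowed processors};
# coresidence: set of (task_a, task_b) pairs.
def feasible(A, tasks, util, capacity, residence=None, coresidence=None):
    residence = residence or {}
    coresidence = coresidence or set()
    # (3) exactly one variant of each task is allocated
    chosen = {}
    for t, vs in tasks.items():
        alloc = [v for v in vs if v in A]
        if len(alloc) != 1:
            return False
        chosen[t] = alloc[0]
    # (4) no processor capacity is exceeded
    load = {}
    for v, p in A.items():
        load[p] = load.get(p, 0) + util[v]
    if any(load[p] > capacity[p] for p in load):
        return False
    # (6) residence constraints
    if any(A[v] not in residence[v] for v in residence if v in A):
        return False
    # (7) coresidence constraints
    return all(A[chosen[a]] == A[chosen[b]] for a, b in coresidence)
```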
The development of a model that allows MiniZinc to search efficiently is key to its success, and we spent some time refining our approach to reduce solution time. There are two objectives to be optimised, and we achieve this with a two-pass method: first the QoS objective is maximised; we then parse the results and re-execute MiniZinc with the found optimal QoS value encoded as a hard constraint. The resulting model is then used to solve the task variant allocation problem with the new objective. The full model is too large to list here; the complete source is available online [26]. To solve instances, we used the Gecode [7] constraint programming toolkit, which combines backtracking search with specialised inference algorithms. We used the default search rules, and only employ standard toolkit constraints. In addition to being used as an exact solver, Gecode can also run in *anytime* fashion, such that it reports the best solution found so far. Our system reports both the increasingly better solutions produced during the run and any globally optimal result, where found. In our evaluation we consider both the standard mode, which returns the global optimum after an unrestricted runtime (Section V-C), and also this anytime mode that returns the best result found so far (Section V-E).

B. Greedy Heuristic

Our second solution method is a non-exact greedy algorithm that uses a heuristic developed from an algorithm originally designed for solving a much simpler allocation problem [5]. The procedure is described in Algorithm 1 and attempts first to obey constraints, then to allocate the most CPU-intensive tasks possible to those processors with the greatest capacity.
Algorithm 1 Greedy Heuristic

```
 1: P_max = sort processors by max capacity
 2: T_max = sort tasks by max variant size
 3: # Allocate variants with residency constraints
 4: for task in T_max do
 5:   V_min = sort variants of task by min variant size
 6:   for variant in V_min do
 7:     if variant has residency constraints AND task has no variant assigned then
 8:       Allocate variant to processor from P_variant
 9:     end if
10:   end for
11: end for
12: # Allocate variants with coresidency constraints
13: for task in T_max do
14:   if task has coresidency constraints AND task has no variant assigned then
15:     Allocate smallest variant to processor from P_max
16:   end if
17: end for
18: # Allocate remaining variants
19: for task in T_max do
20:   if task has no variant assigned then
21:     Allocate smallest variant to processor from P_max
22:   end if
23: end for
24: # Upgrade variants where possible
25: for task in T_max do
26:   if sufficient capacity in assigned processor then
27:     Allocate larger variant of task
28:   end if
29: end for
```

First, the smallest task variants with residency constraints are allocated to processors, beginning with the largest processor if the subset \(R^i_j\) for a given task variant \(v^i_j\) contains more than one element. Next, the smallest variants of any tasks with coresidency constraints are assigned, selecting processors from \(P_{max}\). Then, the smallest variants of any remaining, unallocated, tasks are allocated, again preferring processors with more capacity. Finally, the algorithm attempts to substitute smaller variants with larger ones on the same processor. Note that the way in which the next processor (from \(R^i_j\), \(P_{max}\)) or variant is selected must also ensure that allocations will not result in a violation of any previously satisfied constraints. Also note that the greedy heuristic is not guaranteed to find a solution, but if it finds one it is always feasible, i.e. it satisfies the system constraints.
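A runnable miniature of the greedy idea, under simplifying assumptions of our own (residency, coresidency and bandwidth constraints ignored; all names ours): allocate each task's smallest variant to the processor with the most free capacity, then upgrade variants in place where capacity allows.

```python
# Miniature greedy allocator: tasks maps each task to its variants sorted
# small-to-large by utilisation; returns {variant: processor}, or None if
# some task's smallest variant cannot fit anywhere.
def greedy_allocate(tasks, util, capacity):
    free = dict(capacity)
    alloc = {}
    # allocate the smallest variant of each task, biggest tasks first,
    # always choosing the processor with the most free capacity
    for task, variants in sorted(tasks.items(),
                                 key=lambda kv: -util[kv[1][-1]]):
        smallest = variants[0]
        p = max(free, key=free.get)
        if free[p] < util[smallest]:
            return None
        alloc[smallest] = p
        free[p] -= util[smallest]
    # upgrade variants in place where the assigned processor has room
    for task, variants in tasks.items():
        cur = next(v for v in variants if v in alloc)
        p = alloc[cur]
        for bigger in variants[variants.index(cur) + 1:]:
            if free[p] >= util[bigger] - util[cur]:
                free[p] -= util[bigger] - util[cur]
                del alloc[cur]
                alloc[bigger] = p
                cur = bigger
    return alloc
```

Like the full heuristic, this sketch is not guaranteed to find an allocation, but any allocation it returns respects the capacity constraint.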
The ability to provide solutions is greatly determined by any residency and coresidency constraints.

C. Local Search Metaheuristic

The third algorithm we propose is a simple local search metaheuristic employing random restarts. The process is described by Algorithm 2. Initially, a random assignment is generated by allocating a random variant for each task to a random processor, with all choices made uniformly at random. There is no guarantee that a randomly generated allocation will satisfy the constraints of the model, and indeed the search algorithm is not guaranteed to find a feasible solution in general. As there is no way to determine if the global optimum has been found, the algorithm continues to search the space of assignments until a given timeout is reached. The search may find a local optimum, in which case a random restart is used to explore other parts of the search space (lines 6-7).

Algorithm 2 Local Search Metaheuristic

```
1: current ← random assignment
2: while time < timeout do
3:   for n in neighbours(current) do
4:     if n is superior to current then
5:       current ← n
6:   if no improvement then
7:     current ← random assignment
```

The neighbourhood of a solution in the space of allocations is defined as all those solutions that can be generated by substituting another variant of the same task for one already allocated, or by moving a single variant to a different processor. In order to determine if one solution is preferable to another, a priority ordering amongst the constraints and objectives is established, in order of importance:

1) No processors should be overloaded.
2) The network should not be overloaded.
3) Residency constraints must be satisfied.
4) Coresidency constraints must be satisfied.
5) Average QoS per task should be maximised.
6) Average free capacity per CPU should be maximised.
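The neighbourhood relation just described can be sketched as follows (names are ours): from an allocation, either move one allocated variant to another processor, or swap in a sibling variant of the same task.

```python
# Neighbourhood generator for the local search: yields every allocation
# reachable by one variant move or one same-task variant substitution.
def neighbours(A, tasks, processors):
    """A: {variant_id: processor}; tasks: {task: [variant_id, ...]}."""
    for v, p in A.items():
        # move v to a different processor
        for q in processors:
            if q != p:
                yield {**A, v: q}
        # replace v with another variant of the same task
        task = next(t for t, vs in tasks.items() if v in vs)
        for w in tasks[task]:
            if w != v:
                B = dict(A)
                del B[v]
                B[w] = p
                yield B
```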
A solution is feasible if the first four constraints are satisfied, after which the search will try to optimise QoS and then reduce processor utilisation to free up capacity. This priority ordering method is preferred over the alternative of a weighted sum objective, an approach found elsewhere in the literature [17]. Weighted sum approaches require the user to define numerical relationships between objectives and constraints, which is a somewhat inelegant approach to this problem. For the same reason, we prefer local search over simulated annealing [25], an algorithm we also experimented with, which relies on a numerical gradient in the constrained objective space. IV. Example Case Study Our case study serves as a specific instantiation of the general model presented, with which we can test our algorithmic solutions in a real system. We first present a “baseline” instance of the system, consisting of a single robot, person, server and camera. This simplified configuration illustrates the system components and the constraints imposed on them. Each agent (robot or human) is pursuing a spatial goal. The application’s overarching QoS metric is a combination of essential requirements (e.g. avoid collisions between agents, minimise travel time to reach target goals), as well as more sophisticated preferences (e.g. minimise close-encounters and hindrance between navigating agents, minimise the time taken to infer the true agent goal). Therefore, task variants must be selected and allocated across available processors with the objective of optimising global QoS based on the selected variants’ individual QoS values. A. Software Architecture Figure 1 shows a high-level diagram representing the software architecture of the case study. It is composed of multiple tasks and their message connections. In the figure, connections are labelled with message frequencies, which can be obtained from the maximum bandwidth requirement described in Section II-A. 
The QoS values for the variants of a given task represent the proportional benefit of running that task variant; a variant with a higher QoS, however, would typically incur a higher CPU usage. We rely on an expert system user to estimate QoS values for task variants. We now describe, for each task in our case study, the corresponding variants (see Table I for details):

- **Tracker**: A component of a distributed person tracking algorithm that fuses multiple-camera beliefs using a particle filter. The variants for this task are based on the input image resolution and the output frame rate given a fixed number of cameras. The higher the output frame rate, the more accurate the tracking.
- **Experiment**: A small synchronous task that coordinates all robots taking part in the experiment.
- **Environment**: A local processing task required by each robot. This task combines information generated by the local robot, other robots, and elsewhere in the system (i.e. Tracker, Experiment).
- **Model**: An intention-aware model for predicting the future motion of interactively navigating agents, both robots and humans. The variants for this task are based on the number of hypothetical goals considered given a fixed number of agents. A higher number of modelled agent goals will lead to more accurate goal estimates.
- **Planner**: Generates an interactive costmap, which predicts the future motion of all agents in relation to other agents’ motion given their inferred target goals. This costmap is used by the Navigation task for calculating the trajectory to be executed.
- **AMCL**: A task performing localisation relying on laser data and a known map of the environment [23]. The variants of this task vary with the number of particles the Monte Carlo localisation may use during navigation, since a larger number increases localisation robustness and accuracy in environments populated with other moving obstacles.
We assume the robot moves on average at the preferred speed of 0.3 m/s (min 0.1 m/s, max 0.6 m/s).

- **Navigation**: This task avoids detected obstacles and attempts to plan a path given the interactive costmap of the agents in the environment, ultimately producing the output velocity the robot platform must take. The variants of Navigation depend on the controller frequency, that is, the number of times per second the task produces a command velocity. The higher the frequency, the more reactive and smooth the robot navigation becomes.
- **YouBot_Core**: A core set of ROS packages and nodes that enable the robot to function, for example etherCAT motor connectivity, internal kinematic transformations, and a laser scanner sensor. This task must always run on the corresponding robot (a residence constraint).

Finally, it is critical that a robot can execute all of its own tasks, even if only using the least computationally demanding variants. Those tasks are represented within the robot namespace in Figure 1. This is essential to ensure a continued service in periods of network outage, albeit at lower levels of QoS.

B. Hardware Architecture

The hardware integrating the baseline system is composed of a single network camera and two processors, that is, a robot with an onboard processor and a remote server. Robot and server communicate through a wireless network, and camera and server through a wired network. In practice the network bandwidth is currently not a limiting factor, as both networks are dedicated and private in our lab.

V. Evaluation

In this section, we first describe the results of an empirical characterisation of the baseline system, which is necessary to evaluate both the solution methods and the case study itself. We then extend this characterisation to define a set of system instances of increasing size and complexity. Having established these benchmark problems, we employ them to evaluate the utility of our solution methods, in two stages.
In the first stage, we compare the quality of solutions returned by the three proposed methods to answer the following research questions:

- RQ1A. Is it possible to find globally optimal variant selections and allocations using constraint programming?
- RQ1B. How well can a straightforward greedy heuristic and the local search metaheuristic perform on this problem, relative to the constraint programming method?
- RQ1C. How well do the results produced by the three solution methods translate to deployment on the physical system outlined in Section IV?
- RQ1D. How effective are the allocations proposed by our solution methods compared to random allocations?

In the second stage, we compare an anytime version of the MiniZinc model solver against the local search metaheuristic, to explore their performance over time. Our research question is as follows:

- RQ2. How do the local search metaheuristic and “anytime” constraint programming compare in terms of solution quality after a given period of run-time?

A. System Characterisation

We performed an offline characterisation of the baseline system using common monitoring utilities from ROS (e.g. rqt) and Linux (e.g. htop). The objective was to measure, for each task in the system, the following values: i) the average percentage of CPU utilisation required for each variant on each processor, and ii) the average frequency at which messages published by variants are sent to other tasks, along with the bandwidth required for each type of message. Table I summarises the values obtained. Column two represents the number of variants for each task, and column three the values of the parameters that create the task variants (see Section IV-A). The next three columns include the average values of CPU utilisation, frequency and bandwidth for each task variant — note that the maximum values for frequency are shown in Figure 1. The CPU values for the Tracker task assume only one person in the environment.
Columns seven and eight show the residence and coresidence constraints for each variant and task respectively. Finally, the last column represents the normalised QoS associated with each task variant, where 100 is the maximum value. Note that we have assigned QoS value “1” to single-variant tasks because they have much less impact on the system behaviour, which is reflected in their low CPU utilisation values in Table I. The focus of this work is task variant allocation, for which we require QoS values as inputs. Although QoS values were manually generated based on real system measurements, they may be automatically generated, but we leave this for future work. It is worth noting that the user is required to provide QoS values only “once” for each task variant. Therefore, when the system is scaled up by replicating tasks on more robots or cameras, the user is not required to assign new QoS values.

TABLE I: Task variants characterisation.

<table>
<thead>
<tr> <th>Task</th> <th>Variants</th> <th>Parameters</th> <th>CPU</th> <th>Freq (Hz)</th> <th>BW (KB/s)</th> <th>Res</th> <th>CoRes</th> <th>QoS</th> </tr>
</thead>
<tbody>
<tr> <td>Experiment</td> <td>1</td> <td>-</td> <td>1</td> <td>10</td> <td>-</td> <td>-</td> <td>-</td> <td>1</td> </tr>
<tr> <td>Tracker</td> <td>4</td> <td>Output freq. (25 20 15 10)</td> <td>200 160 120 80</td> <td>25 20 15 10</td> <td>2.5 2.0 1.5 1.0</td> <td>server</td> <td>-</td> <td>100 80 60 40</td> </tr>
<tr> <td>Environment</td> <td>1</td> <td>-</td> <td>1</td> <td>10</td> <td>0.5</td> <td>-</td> <td>-</td> <td>1</td> </tr>
<tr> <td>Model</td> <td>3</td> <td>Num. goals (10000 3500 4)</td> <td>59 39 17</td> <td>10 10 10</td> <td>5 5 5</td> <td>-</td> <td>-</td> <td>100 60 20</td> </tr>
<tr> <td>Planner</td> <td>1</td> <td>-</td> <td>1</td> <td>10</td> <td>0.5</td> <td>-</td> <td>-</td> <td>1</td> </tr>
<tr> <td>AMCL</td> <td>3</td> <td>Particles (3000 500 200)</td> <td>66 41 19</td> <td>2.5 2.5 2.5</td> <td>1 1 1</td> <td>-</td> <td>Navigation</td> <td>100 75 50</td> </tr>
<tr> <td>Navigation</td> <td>3</td> <td>Controller freq. (20 10 2)</td> <td>50 39 25</td> <td>20 10 2</td> <td>1 0.5 0.1</td> <td>-</td> <td>Planner</td> <td>100 67 33</td> </tr>
<tr> <td>Youbot_Core</td> <td>1</td> <td>-</td> <td>16</td> <td>10</td> <td>0.5</td> <td>robot</td> <td>-</td> <td>1</td> </tr>
</tbody>
</table>

Finally, we specify the characteristics of the hardware used to obtain the measurements. The robot’s on-board processor is an Intel Atom, 2 cores @ 1.6GHz, 2GB RAM. The server’s processor is an Intel i5-3340, Quad Core @ 3.30GHz (Turbo), 16GB RAM. Note that all CPU measurements are normalised to the robot CPU capacity (= 100). From this, we can understand why the Tracker instances (which have a high CPU requirement) can only run on the server, translating into a residence constraint. The networks employed are a wireless 802.11ac network at 300Mbps and a 1Gbps Ethernet network.

B. System Instances

In order to obtain more complex instances of the system, we only need to add processors (robots, servers) and/or cameras, allowing the system to cope with a more complex environment and complete more difficult challenges. As these parameters are varied, the total number of tasks and variants changes accordingly, but the number of variants for each task is fixed. Table II summarises the set of instances comprising our benchmarks, and the number of tasks and variants generated for each case — note that only one server is used for all cases.
TABLE II: System instances considered

<table>
<thead>
<tr> <th>Instance</th> <th>Processors</th> <th>Robots</th> <th>Cameras</th> <th>Tasks</th> <th>Variants</th> </tr>
</thead>
<tbody>
<tr> <td>1</td> <td>2</td> <td>1</td> <td>1</td> <td>8</td> <td>17</td> </tr>
<tr> <td>2</td> <td>2</td> <td>1</td> <td>2</td> <td>9</td> <td>21</td> </tr>
<tr> <td>3</td> <td>2</td> <td>1</td> <td>3</td> <td>10</td> <td>25</td> </tr>
<tr> <td>4</td> <td>3</td> <td>2</td> <td>1</td> <td>14</td> <td>29</td> </tr>
<tr> <td>5</td> <td>3</td> <td>2</td> <td>2</td> <td>15</td> <td>33</td> </tr>
<tr> <td>6</td> <td>3</td> <td>2</td> <td>3</td> <td>16</td> <td>37</td> </tr>
<tr> <td>7</td> <td>3</td> <td>2</td> <td>4</td> <td>20</td> <td>41</td> </tr>
<tr> <td>8</td> <td>4</td> <td>3</td> <td>2</td> <td>21</td> <td>45</td> </tr>
<tr> <td>9</td> <td>4</td> <td>3</td> <td>3</td> <td>22</td> <td>49</td> </tr>
<tr> <td>10</td> <td>4</td> <td>3</td> <td>4</td> <td>23</td> <td>53</td> </tr>
</tbody>
</table>

C. Simulation Results: QoS Analysis

We now analyse and compare the QoS values of solutions provided by the three proposed methods (since these are simulation results, we call them expected values). Recall that the allocation of more powerful variants translates into higher global QoS values, and strongly correlates with improved overall system behaviour. For example, switching from the least to the most powerful variant of the Tracker task (QoS values 40 and 100, Table I) provides more accurate and faster tracking of people in the environment. This in turn provides the Planner and Model tasks with better data, improving the robots’ ability to navigate (e.g. avoiding collisions). We execute Python programs implementing the three proposed methods for the instances described in Table II. Answering RQ1A, we found that constraint programming finds the globally optimal solution for all instances analysed.
In other words, for each instance this method provides the allocation of task variants to processors with the best possible average QoS and minimum CPU usage. Since constraint programming provides the best possible QoS, we normalise the QoS provided by the greedy and local search methods to this optimum. Figure 2 shows results comparing the three methods — note that the values for LS are the average of three independent runs, each given the same amount of time as used by CP. Answering RQ1B, we observe that LS and GH achieve on average 15% and 47% less QoS than CP respectively. Since we maintain the server capacity (= 400) across all instances analysed, the problem becomes more constrained as the total number of variants increases. As an example, CP solves Instance 1 by allocating the most powerful variants for all tasks. However, for Instance 10, all tasks need to use less powerful variants in order to satisfy the CPU capacity constraint (e.g. the four Tracker tasks use the least powerful).

D. Analysis of Case Study Behaviour

Having obtained the simulation results, our next step is to validate that the expected QoS values obtained via simulation match the behaviour of the real system. To do this, we performed experiments for Instances 1-6 from Table II in our case study environment. For each instance, we configured the allocation of task variants to processors computed by the solution methods — note that only a single human agent is present in the environment for all experiments. The measured QoS value for each instance and method is then obtained by applying the following formula:

$$QoS_{measured} = \sum_{i \in T} QoS_{v_{ij}} \times \frac{F^{o}_{v_{ij}}}{F_{v_{ij}}} \quad (8)$$

where $QoS_{v_{ij}}$ is the expected QoS value for task variant $v_{ij}$ as predicted by our solution methods, $F^{o}_{v_{ij}}$ is the observed frequency of messages produced by $v_{ij}$ on the real system and $F_{v_{ij}}$ is the expected frequency associated with $v_{ij}$ (Table I).
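A direct implementation of Equation 8 is straightforward. The sketch below uses invented QoS and frequency numbers purely for illustration (they are not the measured values of Table I): each allocated variant contributes its expected QoS, scaled down when its observed message frequency falls short of the expected one.

```python
def measured_qos(variants):
    """Equation 8: sum over tasks of the expected QoS of the allocated
    variant, scaled by the ratio of observed to expected message frequency."""
    return sum(qos * (f_observed / f_expected)
               for qos, f_observed, f_expected in variants)

# One (QoS, observed Hz, expected Hz) triple per allocated variant.
# Numbers are illustrative: the second task sits on an overloaded
# processor and achieves only half of its expected frequency.
allocation = [
    (100, 25.0, 25.0),   # running at full expected frequency
    (67, 5.0, 10.0),     # degraded: observed < expected
]
print(measured_qos(allocation))  # 100*1.0 + 67*0.5 = 133.5
```

When every variant runs at its expected frequency the measured QoS equals the expected QoS, so the gap between the two quantifies the approximation error of the characterisation plus any overload effects.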
These two frequencies can differ due to overloaded processors (for infeasible solutions) and/or approximation errors in the system characterisation. This frequency ratio therefore captures the effectiveness of a task variant in the real system. Figure 3 shows the results. The black error bar for each column denotes the difference between the measured QoS (top of column) obtained with Equation 8 and the expected QoS (upper end of the error bar) obtained by simulation. Answering RQ1C, the measured QoS values for local search, constraint programming and the greedy heuristic deviate by only 8%, 7% and 5% on average respectively from the expected values. This result validates the accuracy of our methodology. Finally, we also examined the system behaviour under random allocations of task variants to processors. Figure 3 also includes these results (RA), where each bar corresponds to the average QoS of three randomly generated allocations. Answering RQ1D, we see that the measured QoS values for random allocations deviate much more from the expected ones, by an average of 22%, than those for the proposed solution methods. The reason is that our solution methods produced feasible allocations for the six instances analysed (i.e. satisfying the system constraints), so differences are only due to approximation errors in the system characterisation. Some of the random allocations, however, produced infeasible solutions, which translated into overloaded processors and therefore larger differences from the expected values. In summary, constraint programming improves by 16%, 41%, and 56% on average over the local search metaheuristic, greedy heuristic and random allocations respectively.

E. Anytime Approaches

We now consider task allocation using the MiniZinc model solver Gecode and the local search metaheuristic as anytime algorithms, where the best allocation currently known can be returned at any point during their execution.
This is particularly important if we are to allocate variants in larger systems or at run-time. The two algorithms approach the problem differently: using Gecode requires a two-pass procedure in which each objective is optimised in turn, whereas the local search metaheuristic attempts to optimise both objectives simultaneously. The relative performance of the two algorithms is therefore of interest. The experiments were performed on a 2.7GHz Intel Core i5 iMac with 16GB RAM, which gives similar solution times to those presented above. We first ran Gecode to completion against each benchmark instance. We selected the smallest two instances that resulted in significant runtimes, which were Instance 7 (approximately 25 seconds) and Instance 8 (550 seconds). We then executed Gecode and local search with increasing timeout values, to evaluate how the solutions they found improved over time. Figures 4a and 4b show typical results. The graphs show two objectives: first, the Quality of Service objective as defined by Equation 1, and second, the Utilisation objective as defined by Equation 2. Each intermediate result is from an independent run of the algorithms, avoiding the problem of autocorrelation. All results in Figure 4 are normalised to the ideal (“1”), which represents: i) for QoS values, the QoS of using the most powerful variants; ii) for CPU utilisation, unutilised processors (free capacity of 100%). These graphs illustrate a clear trend that answers RQ2: MiniZinc produces superior results in the same amount of time, and is our preferred anytime solution method. Promisingly, it also produces high-quality results within a short timeframe, which may enable dynamic optimisation in the future and also increases our confidence in its ability to scale to larger systems. Local search produces feasible solutions with better utilisation values (more free capacity) in a short amount of time; however, our case study architects are primarily concerned with QoS.
As our local search algorithm is implemented in Python, it may be argued that MiniZinc has an unfair advantage in that its solvers are written in C; however, the highly optimised nature of constraint solvers is actually a strong argument in favour of adopting them, particularly as they improve through continuous development over time. The fifth value for local search QoS in Figure 4a is lower than the preceding and following values, which suggests that there is a certain amount of variance in the results produced by local search, depending on the seed provided. To measure this variance, we repeated the experiment ten times using local search, and present the results in Figure 4c. This underlines the fact that the performance of local search is quite variable, although it generally makes steady progress over time.

VI. RELATED WORK

Much work has been performed in the area of task allocation in distributed robotics, where different types of optimisation problems have been addressed. A comprehensive taxonomy can be found in [12], which in turn is based on an earlier taxonomy [8]. According to these taxonomies, the task variant allocation problem presented in this paper falls in the category of Cross-schedule Dependencies (XD): the effective utility of each individual task-agent allocation depends on both the other tasks an agent is performing, and the tasks other agents are performing. Several types of system configurations are supported within this category — e.g. MT-SR-IA considers multi-task robots (MT), single-robot tasks (SR), and instantaneous task assignment (IA). Furthermore, problems in this category can be formulated with different types of mathematical models. In our case, we use a special form of knapsack formulation (Section II). Below we outline key related work in distributed robotics falling in the same category, highlighting how our work differs from past research. The first difference arises from the number of tasks and agents considered.
Prior work based on the linear assignment problem [22] assumes a single task per agent [20, 16, 15, 14]. In our case, the number of tasks is equal to or greater than the number of agents (and the number of variants is greater still). A second point relates to the number of agents simultaneously completing tasks. In [6, 27, 2] several agents are required, which is a subset of our problem. Another consideration is that our system is fully heterogeneous, i.e. all tasks and processors may be different. Some past work does assume heterogeneous tasks and multiple instances of every task [19], but does not consider different variants of the same task, which is the principal addition to the problem here. Aleti et al. [1] provide a high-level general survey of software architecture optimisation techniques. In their taxonomy, our work is in the problem domain of design-time optimisation of embedded systems. We explore optimisation strategies that are both approximate and exact, and we evaluate our work via both benchmark problems and a case study. In terms of the taxonomy in [1], our work is particularly wide-ranging. Finally, Huang et al. [10] consider the selection and placement of task variants for reconfigurable computing applications. They represent applications as directed acyclic graphs of tasks, where each task node can be synthesised using one of four task variants. The variants trade off hardware logic resource utilisation with execution time. Huang et al. use an approximate optimisation strategy based on genetic algorithms to synthesise the task graph on a single FPGA device. To summarise, no existing work in the robotics field addresses all of the considerations that our proposal does, i.e. a constrained, distributed, heterogeneous system with more tasks than nodes and different variants for the tasks.

VII. CONCLUSION

We have addressed a unique generalisation of the task allocation problem in distributed systems, with a specific application to robotics.
We advocate the use of task variants, which provide trade-offs between QoS and resource usage by employing different algorithms and/or taking advantage of heterogeneous hardware. We have presented a mathematical formulation of variant selection and assignment, and evaluated three solution methods on instances from a problem generator based on a robotics case study. We conclude that our solution methods are very effective in selecting and allocating variants such that QoS is optimised and resource usage minimised. We find high-quality solutions that translate well to real systems, providing a useful tool for the system architect.

ACKNOWLEDGMENTS

This work was supported by the AnyScale Applications project under the EPSRC grant EP/L000725/1, and partially by the EPSRC grants EP/F500385/1 and EP/K503058/1, and the BBSRC grant BB/F529254/1. We thank Ornela Dardha for her valuable help in the problem formulation.
SELF-ADAPTATION BY PROACTIVE MEANS-END REASONING

L. Sabatucci, M. Cossentino
Technical Report No.: RT-ICAR-PA-02-06
Date: February 2016

Self-Adaptation by Proactive Means-End Reasoning

L. Sabatucci and M. Cossentino
ICAR-CNR, Palermo, Italy
{sabatucci, cossentino}@pa.icar.cnr.it

April 27, 2016

Abstract

Run-time goal-model artifacts represent a notable approach to communicating requirements to the system and open new directions for dealing with self-adaptation. This work presents a theoretical framework and a general architecture for system evolution, self-configuration and self-healing. The novelty lies in breaking the design-time constraints between system goals and tasks. The user may inject, at run-time, goal models that do not contain tasks, i.e. the description of how to address them. The architecture is therefore responsible for configuring its components as the result of deductions made at the knowledge level. The strength of this architecture is that it promotes reusability and domain independence. Finally, the proposed implementation of the architecture has been evaluated in the context of self-configuration and self-healing through the execution of a set of randomized stress tests.

1 Introduction

Modern distributed and open software systems raise the need to integrate several heterogeneous components and environments into corporate-wide computing systems, and to extend their working boundaries beyond companies into the Internet [34]. As software systems grow in size, complexity, heterogeneity and interconnection, it becomes central to design and implement them in a more versatile, flexible, resilient and robust way. The IBM manifesto of autonomic computing [34], released in 2001, suggests a promising direction for facing software complexity through self-adaptation.
This direction is detailed in many research roadmaps for software engineering of self-adaptive systems [17, 24], which define self-adaptive systems as those systems able to autonomously modify their behavior and/or their structure in response to their perception of the environment and the operative context, in order to address their goals [24]. The vision of computing systems that can manage themselves is fascinating. They are capable of changing their behavior at run-time in order either to maintain or to enhance their functions [17]. Self-adaptation has deep roots in several research fields, for instance artificial intelligence, biologically inspired computing, robotics, requirements/knowledge engineering, control theory, fault-tolerant computing, and so on. In the last decade, the large and heterogeneous body of work concerning self-adaptation has investigated several aspects of the problem, for instance specific architectures for implementing adaptive control loops [46], self-organizing paradigms [4], adaptive requirements [23] and so on. However, to date, many of these problems still remain significant intellectual challenges [17, 24]. Among the others, one point is becoming clear: as self-adaptive systems become reality, human users (not only managers) will inevitably participate in the process of adaptation [7]. This point is central for the models@runtime community [9], which is looking for appropriate artifacts to shorten the distance between user and system through a model of requirements and functionality at a high level of abstraction. However, traditional requirements specification languages need to evolve to explicitly encapsulate points of variability in the behavior of the system [40] and elements of uncertainty in the environment [60]. These elements must be first-class entities the system can exploit to decide how to act.
Currently, goal-oriented methodologies [14, 23] represent the trend for specifying how a software system may adapt, through the conceptualization of the system’s objectives and system variation points. In particular, goal models allow describing alternative ways to address the system’s objectives. Goals represent “invariant points” that motivate the whole mechanism of adaptation. In previous works we observed that functional requirements could be run-time entities, provided to the system according to specific user needs. We also adopted goals as a primary way to describe the system’s objectives. Moreover, we explored a mechanism for injecting or changing goal models during the system’s execution. To this aim we defined a human-oriented language for specifying system goals [55]. We also set up a formal background, based on the concept of state of the world, for allowing the system to run when the specifications of how to address goals are not provided together with the goal model. The result is the PMR Ability, i.e. a facility of the system for autonomously deciding how to operationalize a given goal for which it has no hard-coded knowledge [51]. This paper aims at refining the problem of proactive means-end reasoning and implementing a general architecture for adaptation that, working at the knowledge level [42], is independent of any specific application context and can rather be reused in many domains. A specific focus is given to atomic and self-contained portions of behavior, called capabilities, which implement the paradigm of full reuse [6]. Their peculiarity is that they can be automatically composed, on demand, in order to build system functionalities and to address dynamic and evolving goals. The proposed architecture integrates the MAPE-K model [46, 15] in order to deal with three characterizations of self-adaptation: system evolution, self-configuration and self-healing.
A prototype of the architecture has been implemented in JASON [11], a declarative programming language based on BDI theory [13]. We also randomly generated a set of stress tests to evaluate the performance of self-adaptation. The results provided us with interesting findings for planning future work. The paper is structured as follows: Section 2 presents the theoretical background and defines some basic concepts. Section 3 presents a knowledge-level approach for solving the proactive means-end reasoning problem through a top-down strategy combined with an algorithm for capability composition. Section 4 presents the architecture based on the ability to solve the proactive means-end reasoning problem and the MAPE-K model. Section 5 presents the results of a set of tests, compares the approach with some relevant works from the state of the art and, finally, discusses strengths and limits of the approach. Section 6 briefly summarizes the proposed architecture. Other details of the prototype are in the Appendix.

2 Background and Definitions

This section illustrates the theoretical background and introduces the basic concepts of this paper.

2.1 State of the World and Goals

We consider that a software system has (partial) knowledge about the environment in which it runs. The classic way of expressing this property is (Bel $a \varphi$) [61], which specifies that a software agent $a$ believes $\varphi$ is true, where $\varphi$ is a generic state of affairs. We decided to limit the range of $\varphi$ to first-order variable-free statements (facts). They are expressive enough to represent an object of the environment, a particular property of an object, or a relationship between two or more objects. A fact is a statement to which it is possible to assign a truth value. Examples are tall(john) and likes(john, music).
**Definition 1 (State of the World)** The state of the world at a given time \( \tau \) is a set \( W^\tau \subseteq S \), where \( S \) is the set of all the (non-negated) first-order variable-free statements (facts) \( s_1, s_2, \ldots, s_n \) that can be used in a given domain. \( W^\tau \) has the following characterization:

\[ W^\tau = \{ s_i \in S \mid (Bel \ a \ s_i) \} \]

where \( a \) is the subjective point of view (i.e. the execution engine) that believes all facts in \( W^\tau \) are true at time \( \tau \). \( W^\tau \) describes a closed world in which everything that is not explicitly declared as true is assumed to be false. An example of \( W^\tau \) is \{tall(john), age(john, 16), likes(john, music)\}.

A state of the world is said to be **consistent** when, for all \( s_i, s_j \in S \),

\[ \text{if} \ \{ s_i, s_j \} \vdash \bot \ \text{then} \ \begin{cases} s_i \in W^\tau \Rightarrow s_j \notin W^\tau \\ s_j \in W^\tau \Rightarrow s_i \notin W^\tau \end{cases} \]

i.e. it contains no (semantically) contradictory facts. For instance, the set \{tall(john), small(john)\} is not a valid state of the world, since the two facts produce a semantic contradiction.

A **Condition** \( \varphi : W^\tau \rightarrow \{true, false\} \) over a state of the world is a logic formula composed of predicates and variables, connected through the standard set of logic connectives (\( \neg, \land, \lor \)). A condition may be tested against a given \( W^\tau \) through the operator of unification. For instance, the condition \( \varphi = \text{likes(Someone, music)} \land \text{age(Someone, 16)} \) is true in the state of the world \{tall(john), age(john, 16), likes(john, music)\} through the binding Someone \( \rightarrow \) john, which realizes the syntactic equality.

In many goal-oriented requirements engineering methods the definition of **Goal** [14] is: “a goal is a state of affairs that an actor wants to achieve”.
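The closed-world test of a condition against a state of the world can be made concrete with a small sketch. The snippet below is illustrative only: the tuple encoding of facts and the capitalized-variable convention are our own assumptions, not part of the formal model. It tests a conjunction of positive literals via naive unification:

```python
def is_var(term):
    """Variables are capitalized identifiers, e.g. 'Someone'."""
    return isinstance(term, str) and term[:1].isupper()

def match(pattern, fact, binding):
    """Try to unify one pattern tuple with one ground fact under `binding`."""
    if len(pattern) != len(fact):
        return None
    b = dict(binding)
    for p, f in zip(pattern, fact):
        if is_var(p):
            if p in b and b[p] != f:
                return None        # variable already bound to a different value
            b[p] = f
        elif p != f:
            return None            # constant mismatch
    return b

def holds(conjunction, world):
    """Closed-world test: every literal must unify with some fact in the
    world under one consistent binding; anything not listed is false."""
    bindings = [{}]
    for pattern in conjunction:
        bindings = [b2 for b in bindings for fact in world
                    if (b2 := match(pattern, fact, b)) is not None]
        if not bindings:
            return False
    return True

world = {("tall", "john"), ("age", "john", "16"), ("likes", "john", "music")}
cond = [("likes", "Someone", "music"), ("age", "Someone", "16")]
print(holds(cond, world))  # True: the binding Someone -> john satisfies both
```

Negation and disjunction are omitted for brevity; under the closed-world assumption a negated literal would simply check absence from the fact set.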
We refined this statement to be compatible with the definition of \( W^\tau \) as: “a goal is a desired change in the state of the world that an actor wants to achieve”, in line with [1]. To make this definition operative, it is useful to characterize a goal in terms of a triggering condition and a final state.

**Definition 2 (Goal)** A goal is a pair \( \langle tc, fs \rangle \) where \( tc \) and \( fs \) are conditions to evaluate over a state of the world: \( tc \) describes when the goal should be actively pursued and \( fs \) describes the desired state of the world. Moreover, given a \( W^t \), we say that

\[ \text{the goal is addressed iff } tc(W^t) \land \Diamond fs(W^{t+k}) \text{ where } k > 0 \]

i.e. a goal is addressed if and only if, once the trigger condition is true, the final state eventually holds true somewhere on the subsequent temporal line.

**Definition 3 (Goal Model)** A goal model is a directed graph \((G, R)\) where \(G\) is a set of goals (nodes) and \(R\) is the set of refinement relations (edges), i.e. relations that provide a hierarchical decomposition of goals into sub-goals through AND/OR operators. In a goal model there is exactly one root goal, and there are no refinement cycles.

This definition has been inspired by [22], but we explicitly removed Influence relations [22] and Means-End relations [14] from the definition. The influence relation prescribes that a change in the satisfaction level of a goal affects the satisfaction level of its adjacent goal; it is not currently used in our theoretical model. Means-end links provide a direct connection between a goal and the procedure the system would engage to address it; they are not in the definition of the goal model because the system generates them at run-time. Figure 1 is the partial goal model, represented with the \(i^*\) notation, for the meeting scheduling case study.
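Definition 2 can be checked mechanically over a finite trace of states of the world. The sketch below is our own illustration (the meeting-scheduling predicates are hypothetical): it treats tc and fs as boolean predicates and verifies that fs holds strictly after the first step at which tc holds.

```python
def goal_addressed(tc, fs, trace):
    """A goal <tc, fs> is addressed on a finite trace iff, once the trigger
    condition tc holds at some step t, the final state fs holds at some
    later step t + k (k > 0)."""
    for t, world in enumerate(trace):
        if tc(world):
            return any(fs(w) for w in trace[t + 1:])
    return False  # the trigger never fired, so the goal was never activated

# hypothetical trace for the meeting-scheduling example
trace = [
    {"requested(m1)"},
    {"requested(m1)", "timetables_collected(m1)"},
    {"requested(m1)", "scheduled(m1)"},
]
tc = lambda w: "requested(m1)" in w
fs = lambda w: "scheduled(m1)" in w
print(goal_addressed(tc, fs, trace))  # True: fs holds two steps after tc
```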
This example, redesigned from [22], includes functional (hard) goals only, with AND/OR refinements. The root goal is to provide meeting scheduling services: it is decomposed into scheduling meetings, sending reminders, cancelling meetings and running a website. Meetings, in turn, are scheduled by collecting participant timetables and choosing a schedule and a location. Such a model is useful for analysts to explore alternative ways to fulfill the root goal.

### 2.2 Proactive Means-End Reasoning

In many goal-oriented approaches, a Task is defined as the operationalization of a Goal. This means that each task in a goal model is associated with one (or more) leaf goal(s). This association is made at design time as the result of a human activity called means-end analysis. In the \(i^*\) conceptual model [62], a means-end link introduces a means to attain an end, where the end can be a goal, task, resource or softgoal, whereas the means is usually a task. The TROPOS methodology [14] introduces means-end analysis as the activity of identifying (possibly several alternative) tasks to satisfy a goal. The task is therefore an analysis entity that encapsulates how to address a given goal, according to the following statement: “a Task T is a means to a Goal G (G being the end) when one or more executions of T produce a post-situation that satisfies G” [31]. This paper introduces the concept of system Capability to highlight the difference between means-end analysis made at design time and at run-time.

**Definition 4 (Capability)** A capability \(\langle evo, pre, post \rangle\) is an atomic and self-contained action the system may intentionally use to produce a given evolution of the state of the world. The evolution, denoted as \(evo : W \rightarrow W\), is an endogenous change of the state of the world that takes a state of the world \(W^t\) and produces a new state of the world \(W^{t+1}\) by manipulating the statements in \(W^t\).
The capability may be executed only when a given pre-condition is true \((pre(W^t) = true)\). The post-condition is a run-time helper to check whether the capability has been successfully executed \((post(W^{t+1}) = true)\). The differences between the concepts of Capability and Task are discussed in the following.

**Capabilities and Goals.** Whereas a task has an explicit link to a goal, a capability is relatively independent of any specific goal. The concept of capability arises from the attempt to provide goal models at run-time (goal injection) that do not contain tasks. The system is assumed to own a repository of capabilities to be used for addressing the injected goals. The connection between capabilities and goals relies on their enclosed semantics. In order to evaluate whether a capability may satisfy a goal, the system generates and tries to solve a system of equations obtained from the current state of the world, the capability’s pre/post conditions and the goal’s trigger condition and final state. Given $W^k$, $c_j = \langle \text{evo}_j, \text{pre}_j, \text{post}_j \rangle$ will address $g_i = \langle \text{tc}_i, \text{fs}_i \rangle$ iff:

$$\begin{align*} s &= \text{true}, \forall s \in W^k \\ \text{tc}_i(W^k) &= \text{true} \\ \text{pre}_j(W^k) &= \text{true} \\ \text{evo}_j(W^k) &= W^{k+1} \\ \text{post}_j(W^{k+1}) &= \text{true} \\ \text{fs}_i(W^{k+1}) &= \text{true} \end{align*}$$ (4)

This problem can be easily translated, through predicate resolution, into a boolean satisfiability problem [8] whose details are out of the scope of this paper.

**Composition of Capabilities.** In order to increase the variability of system behavior, this work assumes it is convenient to decompose a functionality into its atomic (but self-contained) components: it is the contextual composition of these parts that may produce a range of possible results. For this reason, capabilities are composable entities.
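For a single (non-composed) capability, System (4) reduces to a direct check: trigger and pre-condition in the current world, post-condition and final state in the evolved one. A minimal sketch, where the 'choose_schedule' capability and its fact strings are hypothetical examples of our own:

```python
def capability_addresses(cap, goal, world):
    """System (4) for one capability: tc and pre must hold in the current
    world; after applying evo, post and fs must hold in the new world."""
    evo, pre, post = cap
    tc, fs = goal
    if not (tc(world) and pre(world)):
        return False
    next_world = evo(world)
    return post(next_world) and fs(next_world)

# hypothetical capability: turn a pending meeting into a scheduled one
choose_schedule = (
    lambda w: (w - {"pending(m1)"}) | {"scheduled(m1)"},  # evo
    lambda w: "pending(m1)" in w,                         # pre
    lambda w: "scheduled(m1)" in w,                       # post
)
goal = (lambda w: "pending(m1)" in w,    # tc
        lambda w: "scheduled(m1)" in w)  # fs
print(capability_addresses(choose_schedule, goal, {"pending(m1)"}))  # True
```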
Their composition is not specified in a design-time model, but can be deduced at run-time by checking the satisfiability of pre and post conditions [8]. When capabilities are composed, the system of equations (4) changes to include the resulting evolution function as the composition of each single capability’s evolution.

**Parametric Capabilities.** A task is generally arranged for a particular working context and is therefore scarcely reusable. Conversely, a capability is conceived with the objective of being as reusable as possible. To this aim a capability may be ‘parametric’, i.e. it may specify some input/output ports. As a consequence, its pre/post and evolution expressions contain logical variables. The robotic-style capability for moving a physical object is an example of a parametric capability: its pre-condition is at$(X_1, Y_1)$ whereas its post-condition is moved_to$(X_2, Y_2)$, where $X_1$, $X_2$, $Y_1$ and $Y_2$ must be specified to make the action concrete. Intuitively, depicting the space of solutions as a Cartesian plane where points represent states of the world, a capability may be expressed as a vector that induces a movement from a state $A$ to a state $B$. A parametric capability is therefore drawn as a family of vectors where the initial and final states are subject to variability. The strength of parametric capabilities is that they can be used in different circumstances and are more versatile in compositions. According to the principle that capabilities have no explicit link to goals, the proposed approach delegates to the system the responsibility to establish which capability to select (or, alternatively, which capabilities to compose) and to configure its parameters for addressing a given goal.
**Definition 5 (Operationalization)** An Operationalization is a tuple $\langle g, h \rangle$ where $g$ is the goal to address and $h$ is the instance of a simple or composed capability, assigned for making the goal operational, in which all parameters have been assigned a ground value.

Setting the operationalization of a whole goal model is a problem formalized as follows:

**Problem 1 (Proactive Means-End Reasoning)** Given the current state of the world $W_I$, a goal model $(G, R)$ and a set of available capabilities $C$, Proactive Means-End Reasoning is the problem of finding a complete and minimal set of operationalizations for the goal model.

We denote by Configuration a solution to the Proactive Means-End Reasoning problem. A Configuration is therefore a set of tuples $\langle g_i, h_j \rangle$ where $g_i \in G$ and $h_j$ may be a simple or composed capability. Given a goal model $(G, R)$, a configuration $cnf$ is said to be

$$ \text{complete iff } \forall g_i \in G, \exists h_j : \langle g_i, h_j \rangle \in cnf; \text{ otherwise it is partial;} $$

$$ \text{minimal iff } \forall g_i \in G, \nexists h_k \neq h_r : \langle g_i, h_k \rangle \in cnf \land \langle g_i, h_r \rangle \in cnf. $$

It is worth noting that: 1. the next sections illustrate an approach for solving Problem 1; for the sake of clarity we use the following terminology: Proactive Means-End Reasoning is a shorthand for Problem 1, whereas PMR Ability refers to the algorithm for solving the problem; 2. Problem 1 is different from a scheduling problem, since it does not require an exact timing of the activities, and it is different from a planning problem, because it does not require creating a workflow for executing the activities [25]; 3. when solving the Proactive Means-End Reasoning problem, discovering more configurations produces additional value for the purpose of adaptation.
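The completeness and minimality predicates translate directly into code. A sketch under our own encoding (configurations as lists of (goal, capability) pairs; all names are illustrative):

```python
def is_complete(cnf, goals):
    """Complete: every goal of the model has at least one operationalization."""
    return set(goals) <= {g for g, _ in cnf}

def is_minimal(cnf, goals):
    """Minimal: no goal is operationalized by two distinct capabilities."""
    assigned = {}
    for g, h in cnf:
        if g in assigned and assigned[g] != h:
            return False
        assigned[g] = h
    return True

goals = ["gA", "gB"]
cnf = [("gA", "h1"), ("gB", "h2")]
print(is_complete(cnf, goals), is_minimal(cnf, goals))   # True True
print(is_complete([("gA", "h1")], goals))                # False: gB uncovered
print(is_minimal(cnf + [("gA", "h3")], goals))           # False: gA twice
```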
Indeed, it allows comparing them according to meta-properties (for instance the quality of service). This is possible under the assumption that \( C \) is a redundant set of capabilities, so that it is possible to replace a capability either with other simple capabilities or with composed ones. Indeed, redundancy represents the common operative context for several works in the area of self-adaptive systems [44, 40, 22].

3 Solving the Problem at the Knowledge Level

In this section we introduce an approach to Problem 1 that is based on the concept of state of the world used to model a dynamic knowledge base. We make the assumption that the solution to the Proactive Means-End Reasoning problem should not depend on the actual data of the environment; rather, its flow of operations and interactions depends on how this data is represented in abstract form. Reasoning at the knowledge level [42], it is possible to represent complex abstract data that is instantiated only at run-time. This simplifies the problem by modeling only those features of the environment that are relevant for the execution (properties to monitor and environment entities to manipulate). For instance, even if we do not know all the users of the meeting scheduler, we are able to implement a capability that checks whether a desired participant is available for the meeting through the predicate \( \text{available}(\text{User}, \text{Meeting}) \). In the same way we may specify a generic goal concerning the selection of a location for the meeting through the predicate \( \text{assigned}(\text{Meeting}, \text{Location}) \). In order to make the algorithm tractable, we obtain the knowledge level automatically from the specifications of goals and capabilities. Therefore, evaluating the contextual fulfillment of goals and the compatibility of capabilities in a composition may be done through symbolic checking techniques.
The proposed approach for implementing a PMR Ability uses a two-step strategy that combines a top-down ‘divide’ method with a bottom-up ‘merge’ method. The top-down goal decomposition explores a hierarchy by decomposing the problem space into smaller disjoint sub-spaces, according to the structure of the goal model and the available capabilities. It then uses a STRIPS-based [26] approach for the bottom-up composition of simpler capabilities into more complex ones.

3.1 Top-Down Goal Decomposition

Given a goal model \((G, R)\) where \(g_{\text{root}} \in G\) is the top goal of the hierarchy, the first step of the proposed procedure is to explore the hierarchy of goals, starting from \(g_{\text{root}}\), in a top-down recursive fashion. The algorithm exploits AND/OR decomposition relationships to deduce the addressability of a goal from its sub-goals. The objective is to obtain at least one complete configuration that addresses the problem; however, when possible, it returns a set of alternative configurations.

Let us indicate with \(cnf_i = (o_1, o_2, \ldots, o_n)\) a complete/partial configuration for the fulfillment of the goal model, where \(o_i = \langle g_i, h_i \rangle\) are the operationalizations. We use the following notation for indicating a generic solution set generated by the algorithm: \(\{cnf_1, cnf_2, \ldots, cnf_k\}\). For instance, \(\{(\langle g_A, h_1 \rangle), (\langle g_B, h_2 \rangle)\}\) indicates a solution set made of two configurations, each composed of only one operationalization. Conversely, \(\{(\langle g_C, h_3 \rangle, \langle g_D, h_4 \rangle)\}\) represents a solution set that contains only one configuration, made of a couple of operationalizations.

The first step of the algorithm is to check whether a goal is a leaf or is decomposed into sub-goals. When the goal is not a leaf, if the relationship is an AND decomposition, the result is the permutation of all the solutions found for each child node.
Example: if a goal \(g_A\) is AND-decomposed into two sub-goals \(g_B\) and \(g_C\), and the algorithm finds

\[ sol\_set_B = \{(\langle g_B, c_1 \rangle), (\langle g_B, c_2 \rangle)\} \qquad sol\_set_C = \{(\langle g_C, c_3 \rangle)\} \]

then the composed solution of \(g_A\) is

\[ sol\_set_A = \{(\langle g_B, c_1 \rangle, \langle g_C, c_3 \rangle), (\langle g_B, c_2 \rangle, \langle g_C, c_3 \rangle)\} \]

If the relationship is an OR decomposition, the result is the union of all the solutions found for each child node. Example: if a goal \(g_A\) is OR-decomposed into two sub-goals \(g_B\) and \(g_C\), and the algorithm finds

\[ sol\_set_B = \{(\langle g_B, c_1 \rangle), (\langle g_B, c_2 \rangle)\} \qquad sol\_set_C = \{(\langle g_C, c_3 \rangle)\} \tag{9} \]

then the composed solution of \(g_A\) is

\[ sol\_set_A = \{(\langle g_B, c_1 \rangle), (\langle g_B, c_2 \rangle), (\langle g_C, c_3 \rangle)\} \tag{10} \]

**ALGORITHM 1:** Means-End Reasoning (part I - exploring goal hierarchies)

Input: $GM$ is the goal model to address, $g_{\text{target}}$ is the goal analyzed at this step of the procedure, $W_I$ is the current state of the world and $C$ is the set of available capabilities.
Output: the set of solutions $sol\_set$.

    Function means_end_reasoning(GM, g_target, W_I, C):
        if g_target is leaf then
            h_set ← compose_capabilities(GM, g_target, W_I, C)
            foreach h_i ∈ h_set do
                add_solution(sol_set, (g_target, h_i))
        else
            dec_type ← get_decomposition_type(g_target, GM)
            subgoals ← get_subgoals(g_target, GM)
            foreach g_i ∈ subgoals do
                sub_sol ← means_end_reasoning(GM, g_i, W_I, C)
                if dec_type is AND then
                    sol_set ← permutation(sol_set, sub_sol)
                else if dec_type is OR then
                    sol_set ← union(sol_set, sub_sol)
        return sol_set

Otherwise, when the target goal is a leaf goal, it is necessary to search for a capability, or a composition of capabilities, that is able to satisfy it.
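The AND/OR combination rules used by Algorithm 1 are, in effect, a cross product and a union over solution sets. A compact sketch under our own encoding (configurations as tuples of (goal, capability) pairs; the helper names `permutation`/`union` follow the pseudocode):

```python
from itertools import product

def permutation(*solution_sets):
    """AND decomposition: every child must be addressed, so combine one
    configuration per child (cross product), concatenating their
    operationalizations into a single configuration."""
    return [sum(combo, ()) for combo in product(*solution_sets)]

def union(*solution_sets):
    """OR decomposition: any child suffices, so keep every configuration."""
    return [cnf for sol_set in solution_sets for cnf in sol_set]

sol_set_B = [(("gB", "c1"),), (("gB", "c2"),)]   # two one-step configurations
sol_set_C = [(("gC", "c3"),)]                    # a single configuration

print(permutation(sol_set_B, sol_set_C))  # two configurations of two pairs each
print(union(sol_set_B, sol_set_C))        # three one-pair configurations
```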
This procedure is discussed in the next section.

3.2 Bottom-Up Capability Composition

A capability produces an evolution of the state of the world. In the same way, a composition of capabilities produces a multi-step world evolution. Capability composition is a procedure that explores the potential impact of a sequence of capabilities with respect to the initial state of the world and the goal to address. The outcome of composing capabilities is modeled as a state transition system where nodes are states of the world and transitions are due to the component capabilities:

**Definition 6 (State of the World Transition System)** A State of the World Transition System (WTS) is a 5-tuple \( (S, W_I, C, E, L) \) where

- \( S \) is the finite set of reachable states of the world;
- \( W_I \in S \) is the initial state of the world;
- \( C \) is the finite set of available capabilities;
- \( E \) is the transition relation, a finite set of transitions, each produced by an evolution function \( evo : W \rightarrow W \);
- \( L : S \rightarrow \text{Score} \) is the labeling function that associates each state with a score measuring (i) the distance from the final state and (ii) the quality of the partial paths, thereby estimating the global impact on satisfying the whole goal model.

The procedure for incrementally building the WTS is reported in Algorithm 2. The inputs of the algorithm are the current state of the world \( W_I \), a generic goal \( g_{\text{target}} \in G \) of the goal model and the set of available capabilities \( C \). The objective is to explore the endogenous effects of combinations of capabilities with the aim of addressing \( g_{\text{target}} \). At each step the algorithm gets the most promising state of the world \( W_i \) to explore (evaluated through a score that is discussed later in this section). Then it extracts \( CS \), the shortest sequence of capabilities that produces the evolution from \( W_I \) to \( W_i \).
First, it checks whether \( CS \) satisfies the goal \( g_{\text{target}} \) according to Equation 3. In other words, given the trigger condition and the final state of the goal, the sub-procedure check_cs_is_solution explores the evolution sequence to check whether both TC and FS are satisfied by states of the world and whether FS = true occurs after TC = true (see Figure 2). If \( CS \) satisfies the goal, then the capability sequence represents a solution and it is added to \( h\_set \).

**ALGORITHM 2:** Means-End Reasoning (part II - composing capabilities)

Input: $GM$ is the goal model to address, $g_{\text{target}}$ is the goal for which to find a capability or a composition of capabilities, $W_I$ is the current state of the world and $C$ is the set of available capabilities.
Output: $h\_set$, the set of capabilities or compositions of capabilities that satisfy $g_{\text{target}}$.

    Function compose_capabilities(GM, g_target, W_I, C):
        WTS ← initialize_space(W_I)
        while |h_set| < max_h_set AND |WTS| < max_space do
            W_i ← get_highest_scored_state(WTS)
            CS ← path_from_to(WTS, W_I, W_i)
            if check_cs_is_solution(CS, g_target) then
                add_solution(h_set, CS)
                mark_as_solution(WTS, CS)
            else
                cap_set ← get_next_capabilities(W_i, CS, WTS)
                expand_and_score(WTS, W_i, cap_set)
        return h_set

Otherwise, the procedure selects a set of capabilities that may be used to expand the WTS. The first criterion for selecting capabilities filters those that may be executed in $W_i$, i.e.
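Algorithm 2 is essentially a best-first search over states of the world. The following compact sketch is our own simplification: capabilities are (name, evo, pre) triples over frozensets of fact strings, the goal check is reduced to "the trigger held initially and the final state holds now", and the score is supplied by the caller.

```python
import heapq
from itertools import count

def compose_capabilities(w_init, goal, caps, score,
                         max_h_set=3, max_space=500):
    """Best-first expansion of the WTS: pop the highest-scored state, check
    whether the path to it addresses the goal, otherwise expand it with
    every applicable capability whose effect yields an unseen state."""
    tc, fs = goal
    tie = count()                                  # tiebreaker for the heap
    frontier = [(-score(w_init), next(tie), w_init, ())]
    seen, h_set = {w_init}, []
    while frontier and len(h_set) < max_h_set and len(seen) < max_space:
        _, _, w, cs = heapq.heappop(frontier)
        if tc(w_init) and fs(w):                   # cs is a solution sequence
            h_set.append(cs)
            continue
        for name, evo, pre in caps:
            if pre(w):                             # applicable (cf. Eq. 11)
                w2 = evo(w)
                if w2 not in seen:                 # relevant (cf. Eq. 12)
                    seen.add(w2)
                    heapq.heappush(frontier,
                                   (-score(w2), next(tie), w2, cs + (name,)))
    return h_set

# toy 'print and send a document' goal
tc = lambda w: "draft(doc)" in w
fs = lambda w: "sent(doc)" in w
caps = [
    ("print", lambda w: w | {"printed(doc)"}, lambda w: "draft(doc)" in w),
    ("send",  lambda w: w | {"sent(doc)"},    lambda w: "printed(doc)" in w),
]
score = lambda w: len(w & {"printed(doc)", "sent(doc)"})
print(compose_capabilities(frozenset({"draft(doc)"}), (tc, fs), caps, score))
# [('print', 'send')]
```

The full procedure also scores partial paths against the TC/FS ordering and marks solution states in the WTS; those details are elided here.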
it considers only the capabilities whose pre-condition is true in $W_i$:

$$cap\_set' = \{\langle evo, pre, post \rangle \in C \mid pre(W_i) = true \}$$ \hspace{1cm} (11)

However, this set is further restricted to exclude irrelevant capabilities, i.e. those that do not produce significant changes in the state of the world:

$$cap\_set = \{\langle evo, pre, post \rangle \in cap\_set' \mid evo(W_i) \notin \{W_I, W_1, \ldots, W_i\} \}$$ \hspace{1cm} (12)

Finally, for each $c_i \in cap\_set$ the sub-procedure $\text{expand\_and\_score}$ creates a new transition in the $WTS$ from $W_i$ to the new state of the world $evo_{c_i}(W_i)$. The generated states of the world are subsequently labeled with the score function.

The score function provides an indication of the quality of a sequence of states of the world $seq = \{W_1, W_2, \ldots, W_i\}$ with respect to the goal to address, and therefore measures how promising the corresponding sequence of capabilities $CS$ is. The score function has been designed to drive the algorithm to explore the combinations that are more promising for the satisfaction of the goal, decreasing at the same time the size of the explored space. For instance, a sequence of states in which $TC = true$ is more interesting than one where $TC = false$. Following this idea, given that a state of the world is made of statements, it is necessary to introduce the principle that each of these statements may or may not contribute to asserting that a goal is satisfied. For instance, if the goal is to print and send a document, the statement $printed(doc)$ could produce a positive impact on the goal. According to this observation, we state two principles for comparing states of the world obtained by capability composition:

- the principle of convergence, i.e. the more a state of the world contains statements that provide a positive impact on a goal, the closer the solution is to being complete for addressing it, according to Equation 5;

- the principle of precision, i.e.
the fewer statements a state of the world contains that provide no positive impact on a goal, the more minimal it is for addressing it, according to Equation 6.

As a consequence we can specify the function as follows:

$$\text{score}(W_i, g_{target}) = \frac{1 + \text{num\_relevant\_statements}(W_i, g_{target})}{\text{num\_statements}(W_i)}$$ \hspace{1cm} (13)

where, given a state $W$, $\text{num\_statements}(W)$ is the cardinality of $W$, i.e. the number of statements contained in $W$, whereas $\text{num\_relevant\_statements}(W, g)$ is the number of statements contained in $W$ that positively contribute to making $TC_g \land FS_g = \text{true}$. For instance, if $W = \{s_1, s_2, s_3, s_4, s_5\}$ and $g = \langle s_2 \land s_8, s_4 \lor s_5 \rangle$ then $\text{num\_statements} = 5$ and $\text{num\_relevant\_statements} = 3$, because $\{s_2, s_4, s_5\}$ are relevant for $g$.

Figure 3 illustrates Function 13 plotted as a stacked line chart highlighting the score trends. Keeping $\text{num\_relevant\_statements}$ constant, the value increases when the total number of statements in $W_i$ decreases (principle of precision). Therefore, a state of the world that contains fewer statements is considered more promising than one that contains more. At the same time, keeping $\text{num\_statements}$ constant, the value is higher the closer the state is to goal satisfaction (principle of convergence). This means that a state of the world that contains statements relevant to a goal is considered more promising than one that does not.

Figure 3: Line chart of the score function, highlighting the trends of the value when keeping either $\text{num\_statements}(W)$ or $\text{num\_relevant\_statements}(W, g)$ constant.

The algorithm terminates when a pre-defined number of solutions has been discovered, or after a maximum number of states of the world has been explored.
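Function (13) is straightforward to implement once the relevant statements of a goal are known. A sketch reproducing the worked example above (statements and goal literals are plain strings; extracting the literals of TC and FS is assumed to be done elsewhere):

```python
def score(world, goal_statements):
    """Function (13): reward relevance (convergence) and penalize size
    (precision). `goal_statements` are the literals occurring in TC and FS."""
    num_relevant = len(world & goal_statements)
    return (1 + num_relevant) / len(world)

W = {"s1", "s2", "s3", "s4", "s5"}
g = {"s2", "s8", "s4", "s5"}   # literals of g = <s2 AND s8, s4 OR s5>
print(score(W, g))  # (1 + 3) / 5 = 0.8
```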
4 A General Architecture for Self-Adaptation

This section illustrates how the PMR Ability may be the basis for a domain-independent self-adaptive software system. It discusses the relation between the approach presented in this paper and three fundamental characteristics of self-adaptive systems: system evolution, self-configuration and self-healing [34, 17].

4.1 System Evolution

Software evolution is a discipline of software engineering that aims at modifying existing software to ensure its reliability and flexibility over time. In particular, we focus on adaptive maintenance [16], an aspect of software evolution that refers to modifications performed to keep software usable in a dynamic environment. The real world changes continuously, and therefore user needs evolve over time. Software that runs in an environment is likely to evolve continuously to adapt to varying requirements and circumstances in that environment. This translates into functional enhancement and/or into the improvement of performance in order to reflect requirements evolution.

A prominent characteristic of the proposed architecture is to handle the runtime addition of new requirements, and therefore to support system evolution [55, 53]. The PMR Ability moves a step beyond traditional systems defined to satisfy a fixed set of hard-coded requirements: it allows adding or changing requirements at run-time (in the form of goal models). We call this mechanism **Goal Injection** [53]. The user may specify new requirements to inject into the system at run-time, and they become a stimulus for modifying its behavior. It is the responsibility of the system, via the PMR Ability, to adapt itself to the new needs. Goal injection is enabled by two components:

- on one hand, the system owns a goal injection monitor that waits for goals from the user;

- on the other hand, user goals are run-time entities, just like other environment properties.
The system acquires goals from the user and maintains knowledge of them, so as to be able to reason on expected results and ultimately condition its global behavior. Of course, existing goals may be retracted as well. Goal injection enables user requirements to evolve over time [32] without either user management or restarting the system. This can be fundamental for domains in which continuity of service is central (finance, service providing and so on). In addition, it is possible to increase or enhance the functions of the system just by injecting a new set of requirements and updating the repository with new domain-specific capabilities. Given that connections between goals and capabilities are discovered on demand, the architecture is robust to capability evolution and may be used for different problem domains without any other specific customization.

4.2 Self-Configuration

Self-configuration is the ability of the system to automatically set up the parameters of its components so as to ensure correct functioning with respect to the defined requirements [34, 12, 45]. This subsection shows a three-layer architecture that exploits the PMR Ability to generate the business logic for requirements fulfillment. In other words, the proposed architecture implements self-configuration intended as the ability of a system to autonomously (without explicit management) select and compose a subset of its capabilities to achieve the user’s goals. The operative hypothesis is that the system owns a repository of capabilities. This set is redundant, i.e. the system may exploit different combinations of capabilities to solve the same problem. Some of these capabilities have input/output parameters that must be configured in order to use them concretely. The proposed architecture is made of three layers (Figure 4): the goal layer, the capability layer and the business layer.
Figure 4: Overview of the three-layer architecture for Self-Configuration.

The uppermost layer of this architecture is the *Goal Layer*, in which the user may specify the expected behavior of the system in terms of high-level goals, according to Definition 2. Goals are not hard-coded in a static goal model defined at design time: the goal injection phase allows the introduction of user goals defined at run-time. Goals are interpreted and analyzed and thereby trigger a new system behavior.

The second layer is the *Capability Layer*, based on the problem of Proactive Means-End Reasoning. It aims at selecting capabilities and configuring them in response to requests defined at the top layer. This corresponds to a strategic deliberation phase in which decisions are made according to the (incomplete) system knowledge about the environment. However, this layer does not reason on concrete data and does not consider possible changes in the environment, because that would be very costly from a computational perspective. Algorithm 2 is explicitly built for self-configuration: while a Configuration is being discovered, it searches for dependencies among the selected capabilities and resolves these dependencies by connecting their input/output ports. The output is a concrete business process obtained by instantiating capabilities into tasks and data into data objects. In this phase the procedure also specifies the dependencies among tasks and how data items are connected to task input/output ports.

The third layer is the *Business Layer*, which executes the business process generated at the second layer. This layer consists of atomic blocks of computation for acquiring and analyzing real data from the environment and for acting so as to produce the desired state of the world. This layer may easily be implemented by the MAPE-K model [46, 15], well known in the literature.
It requires: i) a Monitor component that acquires information from the environment and updates the system knowledge accordingly; ii) an Analyze component that uses the knowledge to determine the need for adaptation with respect to expected states of the world or capability failures; iii) a Plan component that uses the acquired knowledge to synchronize the available capabilities according to the goal hierarchy; and, finally, iv) an Execute component that modifies the environment by using the appropriate capability.

### 4.3 Self-Healing

Self-healing is the ability of the system to automatically discover when requirements fail to be fulfilled and to work around the encountered problems, so as to restore fulfillment of the requirements and to grant continuous functioning with respect to the defined requirements [34, 33]. In the previous section we adopted the MAPE-K model [46, 15] for implementing the business layer of the presented architecture. According to the roadmap of self-adaptive systems [17], one of the principles for implementing self-healing is to explicitly focus on the ‘control loop’ as an internal mechanism for controlling the system’s dynamic behavior. The best-known control architecture is the MAPE-K model, and we propose to place the PMR Ability on top of the MAPE-K architecture in order to generate a macro-loop for self-healing, as shown in Figure 5. The macro activities of the resulting architecture are: monitoring goal injection, proactive means-end reasoning and the MAPE-K loop. In the goal injection phase the user communicates her requirements to the system. The system reacts to the injection of a new goal by activating the PMR Ability in order to assemble a solution for addressing the whole goal model; if at least one solution is discovered, the system selects the highest-scored Configuration and instantiates the corresponding business process, reserving the proper resources for its execution.
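The self-healing macro-loop can be sketched as follows. This is a deliberately abstract outline: `pmr`, `mape_k` and `sense_world` stand in for the components of Figure 5 and are assumptions of this illustration, not MUSA's API.

```python
def self_healing_loop(goal_model, capabilities, pmr, mape_k, sense_world):
    """Run the PMR Ability to obtain a Configuration, execute it through a
    MAPE-K cycle and, on failure, re-plan with the failed capability
    marked as unselectable."""
    unselectable = set()
    while True:
        world = sense_world()
        usable = [c for c in capabilities if c not in unselectable]
        configs = pmr(world, goal_model, usable)
        if not configs:
            return False                      # no configuration addresses the goals
        outcome = mape_k(configs[0], world)   # monitor / analyze / plan / execute
        if outcome == "addressed":
            return True
        if outcome.startswith("failed:"):     # a capability terminated with errors
            unselectable.add(outcome.split(":", 1)[1])
        # otherwise: an unexpected state of the world; just re-run the PMR

# stub components: capability c1 always fails, so the loop re-plans with c2
pmr = lambda world, gm, caps: [caps] if caps else []
mape_k = lambda cfg, world: "failed:c1" if "c1" in cfg else "addressed"
print(self_healing_loop("gm", ["c1", "c2"], pmr, mape_k, lambda: set()))  # True
```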
At this stage it is impossible to predict all possible changes in the environmental conditions. Therefore the agent activates a sub-cycle of monitoring, analysis, planning and execution driven by the knowledge of the environment (MAPE-K). If everything goes as planned, the goal will eventually be addressed. However, given that Algorithm 1 and Algorithm 2 do not consider exogenous changes of the state of affairs, it is possible that unexpected events occur in the environment during the execution. When the system's monitors capture an unexpected state of the world, and the capabilities in the Configuration are not sufficient to deal with it, the system recognizes a situation of failure for one of the requirements. This raises a need-for-adaptation event, and the PMR Ability executes again with a different $W_i$ (the current one). The result will be a different Configuration (if one is possible) for overcoming the unexpected state. The self-adaptation cycle also considers cases in which the execution of a capability terminates with errors. In this case the PMR Ability is re-executed, taking care to mark the capability that failed as 'unselectable'.

5 Evaluation and Discussion

The architecture presented in Section 4 has been implemented in MUSA, a Middleware for User-driven Service Adaptation [21]. MUSA is built as a multi-agent system and developed in JASON [11], a declarative programming language based on the AgentSpeak language [49] and the BDI theory [13]. The state of an agent, together with its knowledge of the operative environment, is modeled through its belief base, expressed as logical predicates. Self-awareness is supported by translating high-level goal and capability specifications into agent beliefs [52]. This enabled the development of the agent's PMR Ability for reasoning on Goals and Capabilities as first-class entities [51, 21]. Additional details on MUSA are provided in the Appendix.
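The self-healing macro-loop described above — re-running the PMR Ability from the current $W_i$ with the failed capability marked as 'unselectable' — can be sketched as follows. The `pmr` and `execute` callables stand in for Algorithm 2 and the business-layer execution respectively; they are hypothetical placeholders, not MUSA's real interfaces:

```python
def self_heal(pmr, execute, goal, world, capabilities):
    """Sketch of the self-healing macro-loop: on failure, mark the failed
    capability as unselectable and re-run proactive means-end reasoning
    from the current state of the world."""
    unselectable = set()
    while True:
        selectable = [c for c in capabilities if c["name"] not in unselectable]
        configuration = pmr(goal, world, selectable)
        if configuration is None:
            return None                      # not enough capabilities to repair
        outcome = execute(configuration, world)
        if outcome["ok"]:
            return configuration             # goal addressed
        unselectable.add(outcome["failed"])  # failed capability becomes unselectable
        world = outcome["world"]             # restart from the current W_i
```

The loop terminates either with a repaired configuration or with `None` when the repository no longer contains enough selectable capabilities, matching the failed scenarios reported in Section 5.2.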
The rest of this section presents and discusses an evaluation benchmark for MUSA in the context of self-configuration and self-healing.

5.1 Evaluating Self-Configuration

The proposed architecture relies on a pair of algorithms for analyzing the goal model and exploring the space of solutions for composing capabilities. The latter algorithm incrementally builds a state transition system in which each edge is generated through the evolution function of a capability and each node is a possible state of the world. The state transition system takes the form of a tree where each branch is a different partial/complete configuration for the fulfillment of a given goal. Exploring the whole space of solutions would take exponential time; however, the score function has been designed to drive the order of exploration, so that the most promising directions are explored first. Here we present the methodology we adopted to generate sequences of stress tests to evaluate the algorithms with respect to self-configuration and self-healing.

1. Random generation of a working context: this step consists of randomly extracting a fixed number of statements from a repository. This context represents the dictionary of terms describing an abstract working context. Example: Dictionary = \[b(e), g(u), l(a), g(a), v(o), z(u), r(u), z(a), v(i), d(e)\].
2. Random generation of goals to satisfy: each goal is generated by randomly selecting terms from the dictionary. Example: \textit{goal("g38", condition(not(z(u))), condition(z(a)))} triggers when the state of the world does not contain the statement \(z(u)\) and is addressed when the state of the world contains \(z(a)\).
3. Random generation of the current state of the world: picking an arbitrary number of statements from the dictionary generates a random \(W_I\). Example: \textit{world([r(u)])}.
4. Finally, random generation of a repository of capabilities.
Each capability is produced by selecting a pair of terms from the dictionary: the first term is the pre-condition and the second term is the post-condition. The evolution function is built consequently. Example: \textit{cap("c1", evo([remove(r(u)), add(z(u))]), condition(r(u)), condition(z(u)))}.

To run a comparative benchmark we selected: (i) the pair of algorithms presented in Section 3, in which capabilities are filtered (see Equations 11 and 12) and WTS nodes are scored (hereafter "score-driven search"), and (ii) the same algorithms with the score function replaced by a breadth-first strategy (hereafter "exhaustive search"). We then ran a series of tests with an incremental growth in the number of capabilities, from 20 up to 70. Each test executes both the score-driven search and the exhaustive search with the same input. We measured the number of visited nodes in the WTS and the number of discovered solutions. The charts in Figure 6 report the results obtained by repeating the test 120 times, starting from 20 capabilities and increasing by 10 after every 20 runs. We used a paired t-test to verify that the numbers of visited nodes obtained through the two methods are significantly different (\(p\text{-value}=0.01\)).

Table 1: Analysis of means (t-test) of Visited States of the World obtained by the two methods.
<table>
<thead>
<tr>
<th>name</th>
<th>mean</th>
<th>median</th>
<th>sd</th>
<th>p.value</th>
<th>effect size</th>
</tr>
</thead>
<tbody>
<tr>
<td>1 Score</td>
<td>128.23</td>
<td>200</td>
<td>86.04</td>
<td></td>
<td></td>
</tr>
<tr>
<td>2 Breadth</td>
<td>148.80</td>
<td>201</td>
<td>83.76</td>
<td></td>
<td></td>
</tr>
<tr>
<td>3 Difference</td>
<td>-20.57</td>
<td>-1</td>
<td>49.50</td>
<td>0.01</td>
<td>-0.42</td>
</tr>
</tbody>
</table>

Figure 6: Data obtained by comparing the algorithms presented in Section 3 with a breadth-first strategy. Configurations are the result of 120 executions with random input, increasing the number of capabilities by 10 every 20 runs.

As can be seen, the number of visited nodes (and therefore the time to complete) is polynomial with respect to the number of capabilities, both for the score-driven search and for the exhaustive search (see 'visited states of world' in Figure 6). To some extent this was surprising, because we expected exponential time, given that the algorithm is in the class of combinatorial search. A deeper analysis shows that the activity of capability filtering (Equations 11 and 12), performed at each step of the algorithm, greatly reduces the space of evolution, and therefore state explosion is limited. Figure 6 reveals that the exhaustive search represents an upper bound for the score-driven algorithm as far as performance is concerned. Indeed, the score-driven search provides better results both in terms of the number of visited nodes and in terms of the number of discovered solutions. We also noted that, considering only those settings in which at least one solution exists, the average number of nodes visited through the score function is definitely better than with an exhaustive search strategy (see 'scenario with solutions' in Figure 6).

5.2 Evaluating Self-Healing

For evaluating this property we added three more items to the previous testing methodology.

5.
Execute the PMR Ability with the input obtained at the previous steps and select one output configuration.
6. Simulate the execution of the configuration and randomly generate an adaptation event.
7. Update the initial state of the world to the current situation at the moment of failure and execute point 5 again.

We then ran a sequence of tests with a fixed number of 40 capabilities, measuring the number of solutions discovered: i) at the first run of self-configuration and ii) after the self-healing.

Figure 7: Result of the sequence of tests for self-healing. The dark grey area represents the size of the space of solutions discovered at the first run of self-configuration for each scenario. The light grey area represents the additional space of solutions built as a result of self-healing.

Figure 7 represents as filled areas the space of configurations obtained by executing the PMR Ability before and after the self-healing event. Among the 13 scenarios, only in three cases (scenarios 3, 4 and 13) did the adaptation fail, because the available capabilities were not sufficient to repair the failure. In all the other cases the procedure performed well, increasing the space of configurations just enough to allow correct goal fulfillment. As a final note, we calculated that the new configuration, obtained for overcoming a failure, on average reuses 72.25% of the capabilities used in the first configuration.

5.3 Related Works

To date we have identified a semantic gap between requirement specifications defined at design-time [58, 48] and the concept of goal used at runtime [9]. This represents a limitation, especially in the development of self-adaptive and evolving systems. Morandini et al. [41, 40] propose to extend the operational semantics of goal models by characterizing the behavior of run-time goals so that they can be directly implemented.
The solution is to enrich the definition of goal by specifying its dynamics while maintaining the flexibility of using different goal types and conditions. Dalpiaz et al. [22] propose a new type of goal model, called the runtime goal model (RGM), which extends the former with annotations about additional state, behavioral and historical information about the fulfillment of goals, for instance specifying when and how many instances of the goals and tasks need to be created. The common element of these two approaches is that the behavior of the system is wired into tasks that are in turn wired to goals of the model. Therefore, even if the system may select among many alternative OR-decomposition relationships, it can adapt its behavior but it cannot evolve beyond the pre-defined tasks.

SAPERE [63] (Self-Aware Pervasive Service Ecosystems) is a general framework inspired by natural self-organizing distributed ecosystems. SAPERE promotes adaptivity by creating a sort of systemic self-awareness. As in our approach, their components have, by design, an associated semantic representation. These live semantic annotations are similar to service descriptions and enable dynamic unsupervised interactions between components.

Baresi et al. [5] introduce the concept of adaptive goals as a means to conveniently describe adaptation countermeasures in a parametric way. An adaptive goal is described as an objective to be achieved, a set of constraints and a sequence of actions to fulfill the objective. The same author proposes A-3 [4], a self-organizing distributed middleware aimed at dealing with high-volume and highly volatile distributed systems. It focuses on the coordination needs of complex systems, yet it also provides designers with a clear view of where they can include control loops and how they can coordinate them for global management.
Like our approach, they consider requirements as run-time entities, even though they do not propose a dynamic execution model in which goals are injected at run-time. In addition, they introduce fuzzy goals for expressing the degree of satisfaction of requirements, which is a possible future direction for extending our definition of goal.

Gorlick et al. [29] present an approach to managing runtime change called Weaves. A weave is an arbitrary network of tool fragments that communicate asynchronously. Similar to our concept of capability, a tool fragment is a small software component that performs a single, well-defined function and may retain state.

Blanchet et al. [10] present the WRABBIT framework, which supports self-healing for service orchestration through conversation among intelligent agents. Each agent is responsible for delivering the services of a participating organization. Globally, the agents are able to discover when one agent's workflow has changed unilaterally, because it may incur conversation errors with other agents. An agent also recognizes mismatches between its own workflow model and the models of other agents. The limit of this approach is that it is domain oriented, since the possible errors must be defined at design-time. Extending WRABBIT's approach for handling unexpected, not-understood situations could be an interesting direction for our work.

Kramer and Magee [35] propose a three-layer architecture for self-adaptation inspired by robotics. The architecture includes (i) a control layer, a reactive component consisting of sensors, actuators and control loops; (ii) a sequencing layer, which reacts to changes from the lower levels by modifying plans to handle the new situation; and (iii) a deliberation layer, which consists of time-consuming planning that attempts to produce a plan to achieve a goal. The main difference with respect to our architecture is that we introduce a layer for handling goal evolution.
Gomaa and Hashimoto [28], in the context of the SASSY research project, look into software adaptation patterns for Service-Oriented applications. Their intuition is that dynamic reconfiguration can be executed by assembling architectural patterns. The objective is to dynamically adapt distributed transactions at run-time, separating the concerns of the individual components of the architecture from the concerns of dynamic adaptation, using a connector adaptation state machine. Like our approach, SASSY provides a uniform approach to the automated adaptation of software systems; however, to date, goal evolution is out of the scope of their work.

Souza et al. [57] focus on evolution requirements, which play an important role in the lifetime of a software system in that they define possible changes to requirements, along with the conditions under which these changes apply. Ghezzi et al. [27] propose ADAM (ADAptive Model-driven execution), a mixed approach between model transformation techniques and probability theory. The modeling part consists of creating an annotated UML Activity diagram whose branches can have an assigned probability, plus an annotated implementation. The activity diagram then becomes an MDP (Markov Decision Process). It is possible to calculate the possible values for the different executions and thus to navigate the model in order to execute it.

5.4 Strengths, Weaknesses and Future Works

The main strengths of the proposed architecture are summarized below.

**Reusability**: capabilities support the paradigm of Full-Reuse [6]. Capabilities are atomic, self-contained and created to be composable. They must be designed to be usable in several contexts, and parameters are the key to achieving finer tuning for a specific problem. Self-configuration is obtained by handling any change through the reuse of available capabilities. In practice, capabilities are the key element of reuse.
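The notion of a capability as an atomic, self-contained, composable unit can be sketched as a small data type. This is an illustrative sketch whose field names follow the \textit{cap(...)} examples of Section 5.1; it is not MUSA's actual implementation:

```python
class Capability:
    """Sketch of a reusable capability: a pre-condition plus an evolution
    function expressed as statements to retract and to assert."""

    def __init__(self, name, pre, remove, add):
        self.name = name
        self.pre = set(pre)        # pre-condition statements
        self.remove = set(remove)  # evolution: statements retracted
        self.add = set(add)        # evolution: statements asserted

    def applicable(self, world):
        # Reusable in any context whose state of the world satisfies the pre-condition.
        return self.pre <= world

    def apply(self, world):
        # Evolution function: returns the new state of the world.
        return (world - self.remove) | self.add


# Mirrors cap("c1", evo([remove(r(u)), add(z(u))]), condition(r(u)), condition(z(u))).
c1 = Capability("c1", pre={"r(u)"}, remove={"r(u)"}, add={"z(u)"})
```

Because a capability carries its own applicability condition and evolution function, the same instance can be selected in any context whose state satisfies its pre-condition, which is what makes it the unit of reuse.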
**Support for Evolution**: the approach relies on the idea that goals, capabilities and their links are not hard-coded. Indeed, goals and capabilities are decoupled and goals are injected at run-time. The dynamic connection between capabilities and goals must be discovered at run-time. In addition, the repository of capabilities can evolve without restarting the system.

**Domain Independence**: working at the knowledge level, the problem is modeled through those features of the environment that are relevant for the execution (elements to monitor and to manipulate). The adopted solution is to enclose all the necessary semantics in goals and capabilities. The PMR Ability does not require further information for producing a configuration. The proposed architecture exploits a general representation of knowledge for reasoning about capabilities that is independent of the particular application that is driving it [47]. Therefore it is possible to move from one domain to another simply by injecting a new set of requirements and updating the repository with new domain-specific capabilities. The same architecture may serve different problem domains, even at the same time, without any other specific customization.

Concluding, a critical analysis of the approach highlights some issues that could be the starting point for improving the proposed architecture. In this approach, as in state-change models [26], actions are instantaneous and there is no provision for asserting what is true while an action is in execution. Such systems cannot represent the situation where one action occurs while some other event or action is occurring [3]. As future work we intend to extend this state-of-the-world-based model towards one that includes time, events and concurrent actions [3]. For instance, it will be possible to add temporal operators and to test a predicate over some time interval [2, 36].
Another point of discussion concerns the real degree of decoupling between Capabilities and Goals. The authors have introduced the use of an ontology for enabling semantic compatibility between these two elements during the Proactive Means-End Reasoning. We have already employed MUSA in 5 research projects with heterogeneous application contexts, from dynamic workflows [53] to a smart-travel system [54]. However, in our in-vitro evaluation, the same development team created both Capabilities and Goals, so ontological commitment was ensured. Our experimental phase is based on the assumption that the ontology is built correctly, thus allowing the system to work properly. Another interesting aspect to consider is the impact of the maintenance phase on the ontology and, as a direct consequence, the degree of degradation of capabilities. We experienced that even changing the definition of a single predicate in the ontology has a detrimental impact on the reliability of the system in using its capabilities.

6 Conclusion

We have presented a theoretical framework for specifying the problem of Proactive Means-End Reasoning in terms of states of the world, goals and capabilities. Solving the problem at the knowledge level provided us with the opportunity to define a general architecture for system evolution, self-configuration and self-healing. This architecture is based on the idea that a user, at runtime, may inject a new goal model without specifying a description of how to address it. The proposed architecture is responsible for configuring and reconfiguring its business layer as the result of reasoning and deductions made at the knowledge level. System evolution is the result of a process of goal management, self-configuration is obtained through the ability to solve the proactive means-end reasoning problem and, finally, self-healing is obtained by closing the loop between self-configuration and execution.
The strengths of the proposed architecture are that it is domain independent and that it supports reusability across many application contexts.

References

[50] Patrizia Ribino, Massimo Cossentino, Carmelo Lodato, Salvatore Lopes, Luca Sabatucci, and Valeria Seidita. Ontology and goal model in designing BDI multi-agent systems. WOA@AI*IA, 1099:66–72, 2013.

[54] Luca Sabatucci, Carmelo Lodato, Salvatore Lopes, and Massimo Cossentino. Highly customizable service composition and orchestration. In Schahram Dustdar, Frank Leymann, and Massimo Villari, editors, Service Oriented and Cloud Computing, volume 9306 of Lecture Notes
Red Hat Enterprise Linux 4 4.8 Release Notes
Release Notes for all Architectures
Edition 2
Ryan Lerch, Red Hat Engineering Content Services, rlerch@redhat.com

Abstract: This details the Release Notes for Red Hat Enterprise Linux 4.8

# Table of Contents

1. RELEASE NOTES UPDATES
2. INTRODUCTION
   2.1. Lifecycle
3. INSTALLATION-RELATED NOTES
   3.1. All Architectures
   3.2. ia64 Architectures
4. FEATURE UPDATES
   4.1. All Architectures
5. KERNEL-RELATED UPDATES
   5.1. All Architectures
   5.2. x86-64 Architectures
   5.3. s390x Architectures
6. DRIVER UPDATES
   6.1. All Architectures
   6.2. s390x Architectures
7. TECHNOLOGY PREVIEWS
8. RESOLVED ISSUES
   8.1. All Architectures
9. KNOWN ISSUES
   9.1. All Architectures
   9.2. ia64 Architectures
A. REVISION HISTORY

1. RELEASE NOTES UPDATES

This section contains additional Release Notes and updates to existing notes that were not included in the distribution version of the Red Hat Enterprise Linux 4.8 Release Notes.

Known Issue: Bugzilla #488186
Running Red Hat Enterprise Linux 4.8 on Lenovo T61 notebooks may cause the system to hang during the boot process, displaying the following error message:
```
mtrr: v2.0 (20020519)
ACPI: Subsystem revision 20040816
ACPI: Found ECDT
```
To work around this issue, disable ACPI (Advanced Configuration and Power Interface) by adding the parameter `acpi=off` to the kernel boot parameters.

Known Issue: Bugzilla #459785
Japanese-language JP106 keyboards will not function correctly when booting into Rescue Mode on Red Hat Enterprise Linux 4.8.

Known Issue: Bugzilla #494022
Updating all packages from Red Hat Enterprise Linux 4.7 to Red Hat Enterprise Linux 4.8 on multilib architectures may fail with dependency issues for the `openmpi-libs` package.
To work around this issue, use the following commands to update the `compat-dapl` package before updating the remaining packages:
```
up2date compat-dapl
up2date -fu
```

Known Issue: Bugzilla #443137
In a typical HA-RAID (High Availability RAID) two-system configuration, two SAS (Serial Attached SCSI) adapters are plugged into two systems that are connected to a shared SAS disk drawer. However, it is currently possible to set the Preferred Dual Adapter State attribute to Primary on both SAS adapters, which may trigger a race condition and cause infinite failover between the adapters. To prevent this error, if the Preferred Dual Adapter State attribute of one SAS adapter is set to Primary, ensure that the other is set to None.

Known Issue: Bugzilla #499457
As a result of N_Port ID Virtualization (NPIV) support added in Red Hat Enterprise Linux 4.8 on s390x architectures, the permanent_port_name sysfs attribute is no longer included. This attribute was used (primarily for debugging purposes) to differentiate the use of NPIV Logical Unit Numbers (LUNs) from within Linux. In the absence of this attribute, system administrators should refer to the Hardware Management Console / Support Element (HMC/SE) to find the virtual port address on an NPIV-enabled system.

Known Issue: Bugzilla #435300
In previous versions of Red Hat Enterprise Linux 4, adding the line `selinux --permissive` to a kickstart file sets SELinux to permissive mode. However, this line is currently ignored by the installer, leaving SELinux set to the default mode: enforcing. To set SELinux to permissive mode during a kickstart installation, add the `setenforce 0` command to the `%pre` section of the kickstart file. Alternatively, run `setenforce 0` after installation is complete.
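For Bugzilla #435300, the relevant kickstart fragment would look like the following sketch (surrounding directives omitted). Note that `setenforce 0` is the command that switches SELinux to permissive mode; `setenforce 1` switches it back to enforcing:

```shell
# Kickstart fragment (sketch). The 'selinux --permissive' directive is
# currently ignored by the installer (Bugzilla #435300), so permissive
# mode is forced explicitly from the %pre section as a workaround.
selinux --permissive

%pre
setenforce 0
```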
**Known Issue: Bugzilla #455251**
In Red Hat Enterprise Linux 4, invoking the kernel system call `setpriority()` with a `which` parameter of type `PRIO_PROCESS` does not set the priority of child threads.

**Recommendation: Firefox Restart**
Red Hat strongly recommends restarting the Firefox browser after updating the `firefox` package. This will ensure that all Firefox updates take effect.

### 2. INTRODUCTION

The following topics are covered in this document:
- Installation-Related Notes
- Feature Updates
- Kernel-Related Updates
- Driver Updates
- Technology Previews
- Resolved Issues
- Known Issues

#### 2.1. Lifecycle

The Red Hat Enterprise Linux 4 life cycle is available at: https://www.redhat.com/security/updates/errata/

As previously announced, the release of Red Hat Enterprise Linux 4.8 marks the beginning of the Production 2 phase of Red Hat Enterprise Linux 4. No new hardware enablement is expected during this phase. Customers should note that their subscriptions provide access to all currently supported versions of Red Hat Enterprise Linux.

### 3. INSTALLATION-RELATED NOTES

The following section includes information specific to the installation of Red Hat Enterprise Linux and the Anaconda installation program.

**NOTE**
When updating from one minor version of Red Hat Enterprise Linux 4 (such as 4.6 to 4.7) to Red Hat Enterprise Linux 4.8, it is recommended that you do so using Red Hat Network, whether through the hosted web user interface or Red Hat Network Satellite. If you are upgrading a system with no available network connectivity, use the "Upgrade" functionality of Anaconda. However, note that Anaconda has limited abilities to handle issues such as dependencies on additional repositories or third-party applications. Further, Anaconda reports installation errors in a log file, not interactively. As such, Red Hat recommends that when upgrading offline systems, you should test and verify the integrity of your upgrade configuration first.
Be sure to carefully review the update log for errors before applying the upgrade to your production environment.

In-place upgrades between major versions of Red Hat Enterprise Linux (for example, upgrading from Red Hat Enterprise Linux 3 to Red Hat Enterprise Linux 4.8) are not supported. While the "Upgrade" option of Anaconda allows you to attempt this, there is no guarantee that the upgrade will result in a working installation. In-place upgrades across major releases do not preserve all system settings, services, and custom configurations. For this reason, Red Hat strongly recommends that you perform a fresh installation when planning to upgrade between major versions.

#### 3.1. All Architectures

**IMPORTANT**
If you are copying the contents of the Red Hat Enterprise Linux 4.8 CD-ROMs (in preparation for a network-based installation, for example), be sure to copy the CD-ROMs for the operating system only. Do not copy the Supplementary CD-ROM or any of the layered product CD-ROMs, as this will overwrite files necessary for Anaconda's proper operation. These CD-ROMs must be installed after Red Hat Enterprise Linux is installed.

**Bugzilla #205295**
The version of GRUB shipped with Red Hat Enterprise Linux 4 (and all updates) does not support software mirroring (RAID1). As such, if you install Red Hat Enterprise Linux 4 on a RAID1 partition, the bootloader will be installed on the first hard drive instead of in the master boot record (MBR). This will render the system unbootable. If you wish to install Red Hat Enterprise Linux 4 on a RAID1 partition, you should clear any pre-existing bootloader from the MBR first.

**Bugzilla #222958**
When installing Red Hat Enterprise Linux 4 in Text Mode on systems that use flat-panel monitors and some ATI cards, the screen area may appear shifted, obscuring some areas of the screen. If this occurs, perform the installation with the parameter `linux nofb`.
**Bugzilla #445835**

When upgrading from Red Hat Enterprise Linux 4.6 to this release, `minilogd` may log several SELinux denials. These error logs are harmless and can be safely ignored.

**Bugzilla #430476**

Previously, in the Anaconda kickstart documentation (located at `/usr/share/doc/anaconda-<anaconda-version>/kickstart-docs.txt`), the section detailing the `--driveorder` option in a kickstart file stated:

> Specify which drive is first in the BIOS boot order.

However, the `--driveorder` option actually requires a list of all drives on the system, with the first boot device appearing first in the list. With this update, the documentation has been clarified and now reads:

> Specify which drive is first in the BIOS boot order. When using the `--driveorder` option in a kickstart file, the ordered list must include all the drives in the system.

### 3.2. ia64 Architectures

**Bugzilla #163910**

In this update, the 64-bit Intel Itanium2 architecture includes runtime support for 32-bit applications through the use of Intel’s IA-32 Execution Layer. The IA-32 Execution Layer is provided on the Extras disc for the Intel Itanium2 architecture. In addition, a set of 32-bit libraries and applications is provided on a separate 32-bit Compatibility Layer disc. The IA-32 Execution Layer and 32-bit compatibility packages together provide a runtime environment for 32-bit applications on the 64-bit native distribution.

To install the IA-32 Execution Layer and required 32-bit compatibility packages, follow these steps:

1. Install Red Hat Enterprise Linux 4.8 for the Intel Itanium2 Architecture.

2. Insert the Red Hat Enterprise Linux 4 Extras CD, which contains the `ia32el` package.

3. After the system has mounted the CD, change to the directory containing the Extras packages. For example:

   ```bash
   cd /media/cdrom/RedHat/RPMS/
   ```

4. Install the `ia32el` and `ksh` packages:

   ```bash
   rpm -Uvh ia32el-<version>.ia64.rpm ksh-<version>.ia64.rpm
   ```

   where `<version>` is the respective version of the `ia32el` or `ksh` package to be installed.

5. Eject the Extras CD:

   ```bash
   eject /media/cdrom
   ```

6. To verify the installation of the 32-bit compatibility layer and libraries, confirm that the `/emul` directory exists and contains files.

7. To verify that the 32-bit compatibility mode is in effect, type the following at a shell prompt:

   ```bash
   service ia32el status
   ```

8. At this point you can install compatibility libraries by inserting the 32-bit Compatibility Layer disc. You may choose to install all of the packages available on the disc, or only the particular packages required to provide runtime support for your 32-bit applications.

### 4. FEATURE UPDATES

#### 4.1. All Architectures

**Bugzilla #469924**

SystemTap is now a fully supported feature in Red Hat Enterprise Linux 4. SystemTap provides a free software (GPL) infrastructure to simplify the gathering of information about a running Linux system, assisting in the diagnosis of performance or functional problems. With the help of SystemTap, developers no longer need to go through the tedious and disruptive sequence of instrument, recompile, install, and reboot that may otherwise be required to collect data.

Note that some SystemTap features available on newer Red Hat Enterprise Linux or Linux systems will not work on Red Hat Enterprise Linux 4 due to missing kernel features. In particular, the absence of kernel utrace support precludes any user-space probing.

**Bugzilla #459041**

dmidecode reports information about BIOSes and motherboard revisions. The version of kernel-utils supplied with this advisory updates dmidecode from version 2.2 to version 2.9. This version identifies newer processors, PCI Express slots and devices, and blade chassis. It also offers enhanced support for the SMBIOS v2.6 specification.
**Bugzilla #453642**

A new version of kernel-utils is included in this release, updating the Intel microcode file to version 20080910 to support newer Intel processors.

**Bugzilla #447979**

smartmontools has been extended to support newer CCISS controllers found in newer HP ProLiant hardware.

**Bugzilla #460904**

The Samba package has been rebased to the upstream version 3.0.33. The 3.0.x version series is a bugfix-only branch of the Samba code base. Rebasing to 3.0.33 brings in a number of important bug fixes and security fixes; no new features are added by this rebase. For more information on the upstream fixes provided by this rebase, refer to the Samba Release Notes: http://samba.org/samba/history/samba-3.0.33.html

**Bugzilla #454833**

ipmitool has been updated to the upstream version 1.8.11, which provides several bug fixes and enhancements over the previous release, including the following:

- Documentation updates
- Bug fixes for SDR/FRU, SOL, and many others
- New commands and options

Please note that the behavior of the `-K` command line switch has changed from prompting for the Kg key to reading the Kg key from an environment variable. The `-Y` flag now behaves as `-K` did prior to this update.

### 5. KERNEL-RELATED UPDATES

#### 5.1. All Architectures

**Bugzilla #467714**

The ibmphp module is not safe to unload. Previously, the mechanism that prevented the ibmphp module from unloading was insufficient, and eventually triggered a bug halt. With this update, the method used to prevent this module from unloading has been improved, preventing the bug halt. However, attempting to unload the module may produce a warning in the message log, indicating that the module is not safe to unload. This warning can be safely ignored.

**Bugzilla #461564**

With this update, physical memory is limited to 64GB for 32-bit x86 kernels running on systems with more than 64GB of memory.
The kernel splits memory into two separate regions: Lowmem and Highmem. Lowmem is mapped into the kernel address space at all times. Highmem, however, is mapped into a kernel virtual window a page at a time as needed. If memory above 64GB is allowed, the mem_map (also known as the page array) size can approach or even exceed the size of Lowmem. If this happens, the kernel either panics during boot or starts but fails prematurely. In the latter case, the kernel fails to allocate kernel memory after booting and either panics or hangs.

**Bugzilla #246233**

Previously, if a user pressed the arrow keys continuously on a Hardware Virtual Machine (HVM), an interrupt race condition between the hardware interrupt and the timer interrupt was encountered. As a result, the keyboard driver reported unknown keycode events. With this update, the i8042 polling timer has been removed, which resolves this issue.

**Bugzilla #435705**

With this update, the diskdump utility (which provides the ability to create and collect vmcore kernel dumps) is now supported for use with the sata_svw driver.

**Bugzilla #439043**

With this update, the "swap_token_timeout" parameter has been added to /proc/sys/vm. This file contains the valid hold time of the swap-out protection token. The Linux virtual memory (VM) subsystem has a token-based thrashing control mechanism and uses the token to prevent unnecessary page faults in thrashing situations. The value is in seconds and can be used to tune thrashing behavior. Setting it to 0 disables the swap token mechanism.

**Bugzilla #439431**

Previously, when an NFSv4 (Network File System Version 4) client encountered issues while processing a directory using readdir(), an error for the entire readdir() call was returned. With this update, the fattr4_rdattr_error flag is now set when readdir() is called, instructing the server to continue on and only report an error on the specific directory entry that was causing the issue.
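The swap_token_timeout tunable described above (Bugzilla #439043) can be made persistent across reboots via /etc/sysctl.conf; a minimal sketch, assuming standard sysctl key naming for /proc/sys/vm entries (the value 0, which disables the mechanism, is only an illustration, not a recommendation):

```
# /etc/sysctl.conf fragment: disable the swap-out protection token
# (any value in seconds is valid; 0 turns the mechanism off)
vm.swap_token_timeout = 0
```

The same setting can be applied at runtime with `sysctl -w vm.swap_token_timeout=0`.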
**Bugzilla #443655**

Previously, the NFS (Network File System) client was not handling malformed replies from the readdir() function. Consequently, the reply from the server would indicate that the call to the readdir() function was successful, but the reply would contain no entries. With this update, the readdir() reply parsing logic has been changed, such that when a malformed reply is received, the client returns an EIO error.

**Bugzilla #448076**

The RPC client stores the result of a portmap call at a place in memory that can be freed and reallocated under the right circumstances. However, under some circumstances, the result of the portmap call was freed from memory too early, which may have resulted in memory corruption. With this update, reference counting has been added to the memory location where the portmap result is stored, and it will only be freed after it has been used.

**Bugzilla #450743**

Under some circumstances, the allocation of some data structures for RPC calls may have been blocked when the system memory was low. Consequently, deadlock may have been encountered under heavy memory pressure when there were a large number of NFS pages awaiting writeback. With this update, the allocation of these data structures is now non-blocking, which resolves this issue.

**Bugzilla #451088**

Previously, degraded performance may have been encountered when writing to an LVM mirrored volume synchronously (using the O_SYNC flag). Consequently, every write I/O to a mirrored volume was delayed by 3ms, resulting in the mirrored volume being approximately 5-10 times slower than a linear volume. With this update, I/O queue unplugging has been added to the dm-raid1 driver, and the performance of mirrored volumes has been improved to be comparable with that of linear volumes.

**Bugzilla #476997**

A new tuning parameter has been added to allow system administrators to change the maximum number of modified pages kupdate writes to disk per iteration each time it runs.
This new tunable (/proc/sys/vm/max_writeback_pages) defaults to a value of 1024 (4MB), so that a maximum of 1024 pages get written out by each iteration of kupdate. Increasing this value alters how aggressively kupdate flushes modified pages and decreases the potential amount of data loss if the system crashes between kupdate runs. However, increasing the max_writeback_pages value may have negative performance consequences on systems that are sensitive to I/O loads.

**Bugzilla #456911**

A new allowable value has been added to the `/proc/sys/kernel/wake_balance` tunable parameter. Setting `wake_balance` to a value of 2 will instruct the scheduler to run the thread on any available CPU rather than scheduling it on the optimal CPU. Setting this kernel parameter to 2 will force the scheduler to reduce the overall latency even at the cost of total system throughput.

**Bugzilla #475715**

When checking a directory tree, the kernel module could, in some circumstances, incorrectly decide the tree was not busy. An active offset mount with an open file handle being used for expires caused the file handle to not count toward the busyness check. This resulted in mount requests being made for already mounted offsets. With this update, the kernel module check has been corrected, and incorrect mount requests are no longer generated.

**Bugzilla #453470**

During system initialization, the CPU vendor was detected after the initialization of the Advanced Programmable Interrupt Controllers (APICs). Consequently, on x86_64 AMD systems with more than 8 cores, APIC clustered mode was used, resulting in suboptimal system performance. With this update, the CPU vendor is now queried prior to initializing the APICs, resulting in APIC physical flat mode being used by default, which resolves this issue.
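The max_writeback_pages and wake_balance tunables described above (Bugzilla #476997 and #456911) can likewise be persisted via /etc/sysctl.conf; a minimal sketch, assuming standard sysctl key naming — the values shown are illustrative, not recommendations:

```
# /etc/sysctl.conf fragment: write up to 2048 modified pages (8MB)
# per kupdate iteration
vm.max_writeback_pages = 2048

# Favor lower wakeup latency over total throughput
kernel.wake_balance = 2
```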
**Bugzilla #462459**

The Common Internet File System (CIFS) code has been updated in Red Hat Enterprise Linux 4.8, fixing a number of bugs that had been repaired upstream, including the following change: previously, when mounting a server without Unix extensions, it was possible to change the mode of a file. However, this mode change could not be permanently stored, and may have reverted to the original mode at any time. With this update, the mode of the file cannot be temporarily changed by default; `chmod()` calls will return success, but have no effect. A new mount option, `dynperm`, must be used if the old behavior is required.

**Bugzilla #451819**

Previously, a race condition in the kernel may have been encountered between `dio_bio_end_aio()` and `dio_await_one()`. This may have led to a situation where direct I/O is left waiting indefinitely on an I/O process that has already completed. With this update, these reference counting operations are now locked so that the submission and completion paths see a unified state, which resolves this issue.

**Bugzilla #249775**

Previously, upgrading a fully virtualized guest system from Red Hat Enterprise Linux 4.6 (with the `kmod-xenpv` package installed) to newer versions of Red Hat Enterprise Linux 4 resulted in an improper module dependency between the built-in kernel modules `xen-vbd.ko` and `xen-vnif.ko` and the older `xen-platform-pci.ko` module. Consequently, file systems mounted via the `xen-vbd.ko` block driver, and guest networking using the `xen-vnif.ko` network driver, would fail.

In Red Hat Enterprise Linux 4.7, the functionality in the `xen-platform-pci.ko` module was built into the kernel. However, when a formerly loadable kernel module becomes part of the kernel, the symbol dependency check for existing loadable modules is not accounted for correctly in the module-init-tools. With this update, the `xen-platform-pci.ko` functionality has been removed from the built-in kernel and placed back into a loadable module, allowing the module-init-tools to check and create the proper dependencies during a kernel upgrade.

**Bugzilla #463897**

Previously, attempting to mount disks or partitions in a 32-bit Red Hat Enterprise Linux 4.6 fully virtualized guest using the paravirtualized block driver (xen-vbd.ko) on a 64-bit host would fail. With this update, the block front driver (block.c) has been updated to inform the block back driver that the guest is using the 32-bit protocol, which resolves this issue.

**Bugzilla #460984**

Previously, installing the pv-on-hvm drivers on a bare-metal kernel automatically created the /proc/xen directory. Consequently, applications that verify whether the system is running a virtualized kernel by checking for the existence of the /proc/xen directory may have incorrectly assumed that the virtualized kernel was being used. With this update, the pv-on-hvm drivers no longer create the /proc/xen directory, which resolves this issue.

**Bugzilla #455756**

Previously, paravirtualized guests could only have a maximum of 16 disk devices. In this update, this limit has been increased to a maximum of 256 disk devices.

**Bugzilla #523930**

In some circumstances, write operations to a particular TTY device opened by more than one user (for example, one opened it as /dev/console and the other opened it as /dev/ttyS0) were blocked. If one user opened the TTY terminal without setting the O_NONBLOCK flag, this user’s write operations were suspended if the output buffer was full or if a STOP (Ctrl-S) signal was sent. As well, because the O_NONBLOCK flag was not respected, write operations for user terminals opened with the O_NONBLOCK flag set were also blocked. This update re-implements TTY locks, ensuring O_NONBLOCK works as expected, even if a STOP signal is sent from another terminal.
**Bugzilla #519692**

Previously, the get_random_int() function returned the same number until the jiffies counter (which ticks at the clock interrupt frequency) or the process ID (PID) changed, making it possible to predict the random numbers. This may have weakened the ASLR security feature. With this update, get_random_int() is more random and no longer uses a common seed value, reducing the possibility of predicting the values it returns.

**Bugzilla #518707**

ib_mthca, the driver for Host Channel Adapter (HCA) cards based on the Mellanox Technologies MT25408 InfiniHost III Lx HCA integrated circuit device, uses kmalloc() to allocate large bitmasks. This ensures allocated memory is a contiguous physical block, as is required by DMA devices such as these HCA cards. Previously, the largest allowed kmalloc() allocation was 128kB. If ib_mthca was set to allocate more than 128kB (for example, by setting the num_mutt option to "num_mutt=2097152", causing kmalloc() to allocate 256kB), the driver failed to load, returning the message:

```
Failed to initialize memory region table, aborting.
```

This update alters the allocation methods of the ib_mthca driver. When mthca_buddy_init() wants more than a page, memory is allocated directly from the page allocator rather than using kmalloc(). It is now possible to pin large amounts of memory for use by the ib_mthca driver by increasing the values assigned to num_mutt and num_mtt.

**Bugzilla #519446**

Previously, there were some instances in the kernel where the __ptrace_unlink() function (part of the ptrace system call) used REMOVE_LINKS and SET_LINKS, rather than add_parent and remove_parent, while changing the parent of a process. This approach could corrupt the global process list and, as a consequence, create deadlocked and unkillable processes in some circumstances.
With this update, __ptrace_unlink() now uses add_parent and remove_parent in every instance, ensuring that deadlocked and unkillable processes cannot be created.

**NOTE**

Unkillable or deadlocked processes created by this bug had no effect on system availability.

#### 5.2. x86-64 Architectures

**Bugzilla #437881**

Previously, a missing sign extension in the x86_64 ptrace code may have caused gdb to fail on the x86_64 architecture when debugging an i386 application. With this update, the sign is now correctly extended, which resolves this issue.

#### 5.3. s390x Architectures

**Bugzilla #249775**

On Red Hat Enterprise Linux 4.8, N_Port ID Virtualization (NPIV) for System z guests using zFCP is now enabled. NPIV allows a Fibre Channel HBA to log in multiple times to a Fibre Channel fabric using a single physical port (N_Port). With this functionality, a Storage Area Network (SAN) administrator can assign one or more logical unit numbers (LUNs) to a particular System z guest, making those LUNs inaccessible to others. For further information, see "Introducing N_Port Identifier Virtualization for IBM System z9, REDP-4125", available at http://www.redbooks.ibm.com/abstracts/redp4125.html

### 6. DRIVER UPDATES

#### 6.1. All Architectures

**Bugzilla #452846**

The Intel® High Definition Audio (HDA) driver in ALSA has been updated. This update improves audio support for newer hardware with HDA integrated audio.

**Bugzilla #479408**

Previously, network devices using the forcedeth driver may have stopped responding while running rcp commands from multiple clients. With this update, the forcedeth driver has been updated, which resolves this issue.

**Bugzilla #441707**

Previously, the Automatic Direct Memory Access (ADMA) mode was enabled by default in the sata_nv driver. Consequently, device errors and timeouts may have been encountered with some devices that utilize the sata_nv driver.
With this update, ADMA mode is now disabled by default, which resolves this issue.

**Bugzilla #446215**

The drivers for virtio, the platform for I/O virtualization in KVM, have been backported to Red Hat Enterprise Linux 4.8 from Linux kernel 2.6.27. These drivers enable KVM guests to achieve higher levels of I/O performance. Various user-space components, such as anaconda, kudzu, lvm, selinux, and mkinitrd, have also been updated to support virtio devices.

**Bugzilla #451966**

The r8169 driver has been updated to provide support for newer network chipsets. With this update, all variants of RTL810x/RTL8168(9) are now supported in Red Hat Enterprise Linux 4.8.

**Bugzilla #452163**

The mptsas driver has been updated to version 3.12.29.00. This update includes bug fixes and enables the following new features:

- Dual Port support.
- SAS chip Power Management.

**Bugzilla #452271**

The lpfc driver has been updated to version 8.0.16.46. This update applies several bug fixes and enhancements, including:

- support for FCoE LP21000 HBAs
- support for HBAnyware 4.0

**Bugzilla #455297**

The megaraid_sas driver for SAS-based RAID controllers has been updated to version 4.01-RH1. Several bug fixes and improvements are applied by this update, including:

- Added support for the LSI Generation 2 Controllers (0078, 0079).
- Added the shutdown DCMD command to the shutdown routine to improve firmware shutdown.
- Fixed a bug that caused unexpected interrupts in the hardware Linux driver.

**Bugzilla #454838**

The eHEA Ethernet device driver for IBM eServer System p has been updated to version 0078-08.

**Bugzilla #490503**

The eHCA InfiniBand device driver will not be supported for Red Hat Enterprise Linux 4.8 and all future Red Hat Enterprise Linux 4 releases.

#### 6.2. s390x Architectures

**Bugzilla #448777**

Systems using zFCP for access to SCSI disks on Red Hat Enterprise Linux 4 require a hardware Fibre Channel switch to be connected between the mainframe and disk storage.
This update enables point-to-point connections, which are fibre connections directly from the mainframe to the disk storage. While connection to a Fibre Channel switch is still supported, it is no longer required.

### 7. TECHNOLOGY PREVIEWS

Technology Preview features are currently not supported under Red Hat Enterprise Linux 4.8 subscription services, may not be functionally complete, and are generally not suitable for production use. However, these features are included as a customer convenience and to provide the feature with wider exposure. Customers may find these features useful in a non-production environment, and are also free to provide feedback and functionality suggestions for a Technology Preview feature before it becomes fully supported. Errata will be provided for high-severity security issues.

During the development of a Technology Preview feature, additional components may become available to the public for testing. It is the intention of Red Hat to fully support Technology Preview features in a future release. For more information on the scope of Technology Previews in Red Hat Enterprise Linux, please view the Technology Preview Features Support Scope page on the Red Hat website.

**OpenOffice 2.0**

OpenOffice 2.0 is now included in this release as a Technology Preview. This suite features several improvements, including ODF and PDF functionality, support for digital signatures, and greater compatibility with other office suites in terms of format and interface. In addition, the OpenOffice 2.0 spreadsheet has enhanced pivot table support and can now handle up to 65,000 rows. For more information about OpenOffice 2.0, please refer to http://www.openoffice.org/dev_docs/features/2.0/index.html.

### 8. RESOLVED ISSUES

#### 8.1. All Architectures

**Bugzilla #452919**

Previously, if the Red Hat Network applet was used to re-register the client to a different Red Hat Satellite Server, the applet would continue to show updates that had been available on the previous server, even if they were not available on the current server. The /etc/sysconfig/rhn/rhn-applet file would not change to reflect the details of the new server. The version of the applet provided with this update associates a cache of updates with a server URL, and therefore ensures that the updates displayed to the user are actually available. This version can also detect when its configuration file has changed. If such a change is detected, the applet will automatically reload the configuration variables and create new server connections.

**Bugzilla #454690**

On some SGI Altix systems that feature the IOC4 multi-function device, you may encounter problems when using attached IDE devices (such as CD-ROM drives). This is caused by a bug in the sgiioc4 IDE driver, which prevents some devices from being detected properly on system boot. You can work around this bug by manually loading the driver, which in turn allows attached IDE devices to be detected properly. To do so, run the following command as root:

```bash
/sbin/modprobe sgiioc4
```

**Bugzilla #454690**

sysreport.legacy used $HOME as its root directory. If this environment variable did not exist, or the directory it referred to was not writable, sysreport.legacy could not generate its report and would exit with the message "Cannot make temp dir". sysreport.legacy now uses a randomly created directory as its root directory and can therefore generate a report even on a system without a usable $HOME.

**Bugzilla #476767**

The automount daemon used a fixed-size buffer, 128 bytes long, to receive information from the SIOCGIFCONF ioctl about local interfaces when testing for the proximity of a host corresponding to a given mount.
Since the details of each interface are 40 bytes long, the daemon could receive information on no more than three local interfaces. If the host corresponding to the mount had an address that was local but did not correspond to one of those three interfaces, the proximity would be classified incorrectly. The automount daemon now dynamically allocates a buffer, ensuring that it is large enough to contain information on all interfaces on the system, providing the ability to correctly detect the proximity of a host given for an NFS mount.

**Bugzilla #465237**

For automount map entries that refer to multiple hosts in the mount location (replicated mounts), the automount daemon probes a list of remote hosts for their proximity and NFS version. Hosts that fail to respond are removed from the list. If no remote hosts reply at all, the list may become empty. Previously, the daemon did not check whether the list was empty following the initial probe, which would lead to a segmentation fault (by dereferencing a NULL pointer). This check has been added.

**Bugzilla #444942**

The ttfonts-zh_CN package formerly included the Zhong Yi Song TrueType font. The copyright in this font belongs to Beijing Zhong Yi Electronics Co., which has licensed Red Hat Inc. to distribute the font only in products and software under the Red Hat name. The inclusion of this font in ttfonts-zh_CN would therefore preclude Red Hat from freely distributing this package. The Zhong Yi Song TrueType font is still available to Red Hat customers via the Red Hat Network and the Supplementary CD in the fonts-chinese-zyson package.

**Bugzilla #457228**

multipathd crashed with a status of "multipathd dead but pid file exists" when multipath was configured for 1024 or more paths, because it was unable to open a file descriptor for each path. This may also have caused "error calling out /sbin/mpath_prio_ontap /dev/[device]" errors.
Now, a new multipath.conf parameter, max_fds, allows end users to set the maximum number of file descriptors that the multipathd process can have open, or to use "max" to set the number to the system maximum. Setting max_fds either to a sufficiently high number or to "max" avoids this crash in multipathd.

**Bugzilla #457552**

Previously, when using the aacraid driver with an Adaptec 2120S or Adaptec 2200S controller, the system may have failed to boot, returning the error: "aac_srb: aac_fib_send failed with status 8195". With this update, the aacraid driver has been updated, which resolves this issue.

**Bugzilla #453150**

SOS is a set of tools that gathers information about a system’s hardware and current configuration. The information can then be used for diagnostic purposes and debugging. With this update, the reports generated by sosreport now include five types of information that were not previously collected:

- the content of `/var/log/cron*` and the output of `crontab -l`, to show what was running at the time that the problem occurred.
- partition information from parted instead of what was previously collected from fdisk, since parted can collect partition information in situations where fdisk cannot (for example, GUID partitions).
- output from `dumpe2fs -l`.
- the content of `/etc/inittab`.
- output from `/sbin/service --status-all`, to show the current status of services. Previously, only their settings at boot time were collected (from `chkconfig --list`).

**Bugzilla #453999**

`automount` uses `umount(8)` when expiring mounts, and `umount(8)` can wait indefinitely for a server to respond. This can lead to the expire being blocked, causing mounts managed by the same `/usr/sbin/automount` process (that is, the mounts that the given automount process is managing) not to be expired for a long period of time. Consequently, if a server was unreachable, automount would not unmount any expired mounts, even on the servers that were responding.
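The max_fds parameter described earlier in this section (Bugzilla #457228) is set in /etc/multipath.conf; a minimal sketch, assuming the usual defaults section syntax:

```
# /etc/multipath.conf fragment: let multipathd open as many
# file descriptors as the system allows
defaults {
        max_fds max
}
```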
Systems can then be left with a large number of mounts that could be expired but are not. automount now includes a command line option to specify a time for automount to wait before giving up and moving on to the remaining mounts. Expired mounts can therefore be unmounted even if some servers do not respond.

**Bugzilla #479016**

The `netpbm` package has been updated to fix the following bugs:

- Several utilities shipped with `netpbm` did not accept files from standard input, even though this method was in accordance with the documentation. With this update, this issue has been resolved.
- Several utilities shipped with `netpbm` may have crashed during the processing of image files. With this update, this issue has been resolved.

**Bugzilla #490104**

The ICQ Internet message protocol servers recently changed and now require clients to use a newer version of the ICQ protocol. Logging in to ICQ with Pidgin 2.5.2 (the version previously shipped with Red Hat Enterprise Linux 4) failed with an error message as a result. With this update, Pidgin has been updated to version 2.5.5, which resolves this issue.

**Bugzilla #479692**

Previously, the Red Hat Knowledgebase article documenting Fibre Channel rescan in Red Hat Enterprise Linux 4 was not accurate. This procedure has now been updated, and can be viewed at: http://kbase.redhat.com/faq/docs/DOC-3942

**Bugzilla #422371**

After successfully connecting to an SSH server, the server may return a text-based banner to the SSH client. Consequently, if `gftp` (a graphical FTP client) attempted to connect (via SFTP) to an SSH server that returns a banner, gftp would interpret the banner as an error and close the connection. With this update, `gftp` has been updated to version 2.0.18, allowing connections to servers with banners.

**Bugzilla #452257**

When uploading a single file to an NFS directory, the timestamp indicating the modification and access times of the file may not have been recorded correctly.
With this update, the timestamp is now always updated, which resolves this issue.

**Bugzilla #453033**

The probing code in kudzu for PCI devices would not properly find some modules that work by binding to specific PCI classes, notably the sgiioc4 driver on SGI Altix systems. Without these modules loaded, the system would not detect devices that depended on the driver. A new version of the probing code is included in this updated package, which is able to successfully find the affected modules.

### 9. KNOWN ISSUES

#### 9.1. All Architectures

**Bugzilla #484117**

The Logical Volume Manager in Red Hat Enterprise Linux 4.8 reports file descriptor leaks, resulting in the following error returned to the installation output:

```
File descriptor NUM (socket:XXXX) leaked on lvm invocation.
```

This message can be safely ignored.

**Bugzilla #468097**

When installing Red Hat Enterprise Linux 4 through a Network File System (NFS) server, the installer is unable to correctly close the NFS mount points. This might cause the NFS server to misbehave. In these cases, Red Hat suggests the use of an HTTP server for installations.

**Bugzilla #468097**

On systems where the BIOS is able to do both legacy (acpi) and native (pciehp) PCI hotplugging, it is necessary for the administrator to choose a preferred method and explicitly prevent Red Hat Enterprise Linux 4 from loading the module for the undesired method. This is done by blacklisting the undesired module in /etc/modprobe.conf.

**Bugzilla #451164**

Hardware testing for the Mellanox MT25204 has revealed that an internal error occurs under certain high-load conditions. When the ib_mthca driver reports a catastrophic error on this hardware, it is usually related to an insufficient completion queue depth relative to the number of outstanding work requests generated by the user application. Although the driver will reset the hardware and recover from such an event, all existing connections at the time of the error will be lost.
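For the PCI hotplug module choice described above (Bugzilla #468097), an /etc/modprobe.conf fragment might look like the following sketch. The install-to-/bin/true idiom is one common way to blacklist a module; pciehp is used here only as an example of the undesired module:

```
# /etc/modprobe.conf fragment: prevent the native PCIe hotplug
# module from loading (substitute the legacy module name instead
# if the native method is the one you want to keep)
install pciehp /bin/true
```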
This generally results in a segmentation fault in the user application. Further, if opensm is running at the time the error occurs, then you need to manually restart it in order to resume proper operation.

**Bugzilla #443795**

A bug in previous versions of openmpi and lam may prevent you from upgrading these packages. This same bug may cause up2date to fail when upgrading all packages. This bug manifests in the following error when attempting to upgrade openmpi or lam:

```
error: %preun(openmpi-[version]) scriptlet failed, exit status 2
```

This bug also manifests in the following error (logged in /var/log/up2date) when attempting to upgrade all packages through up2date:

```
up2date Failed running rpm transaction - %pre %pro failure ?.
```

As such, you need to manually remove older versions of openmpi and lam first in order to avoid these errors. To do so, use the following rpm command:

```
rpm -qa | grep -E '^openmpi-|^lam-' | xargs rpm -e --noscripts --allmatches
```

**Bugzilla #430494**

When a LUN is deleted on a configured storage system, the change is not reflected on the host. In such cases, lvm commands will hang indefinitely when dm-multipath is used, as the LUN has now become stale. To work around this, delete all device and mpath link entries in /etc/lvm/.cache specific to the stale LUN. To find out what these entries are, run the following command:

```
ls -l /dev/mpath | grep <stale LUN>
```

For example, if `<stale LUN>` is 3600d0230003414f30000203a7bc41a00, the following results may appear:

```
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00 -> ../dm-4
lrwxrwxrwx 1 root root 7 Aug 2 10:33 /3600d0230003414f30000203a7bc41a00p1 -> ../dm-5
```

This means that 3600d0230003414f30000203a7bc41a00 is mapped to two mpath links: dm-4 and dm-5.
As such, the following lines should be deleted from /etc/lvm/.cache:

```
dev/dm-4
dev/dm-5
dev/mapper/3600d0230003414f30000203a7bc41a00
dev/mapper/3600d0230003414f30000203a7bc41a00p1
dev/mpath/3600d0230003414f30000203a7bc41a00
dev/mpath/3600d0230003414f30000203a7bc41a00p1
```

**Bugzilla #195685**

If you need to use the hp_sw kernel module, install the updated device-mapper-multipath package. You also need to properly configure the HP array to correctly use active/passive mode and recognize connections from a Linux machine. To do this, perform the following steps:

1. Determine the world wide port name (WWPN) of each connection by using `show connections`. Below is a sample output of `show connections` on an HP MSA1000 array with two connections:

```
Connection Name: <Unknown>
Host WWNN = 200100E0-8B3C0A65
Host WWPN = 210100E0-8B3C0A65
Profile Name = Default
Unit Offset = 0
Controller 2 Port 1 Status = Online

Connection Name: <Unknown>
Host WWNN = 200000E0-8B1C0A65
Host WWPN = 210000E0-8B1C0A65
Profile Name = Default
Unit Offset = 0
Controller 1 Port 1 Status = Online
```

2. Configure each connection properly using the following command:

```
add connection [connection name] WWPN=[WWPN ID] profile=[profile] OFFSET=[unit offset]
```

Note that [connection name] can be set arbitrarily. Using the given example, the proper commands should be:

```
add connection foo-p2 WWPN=210000E0-8B1C0A65 profile=Linux OFFSET=0
add connection foo-p1 WWPN=210100E0-8B3C0A65 profile=Linux OFFSET=0
```

3. Run `show connections` again to verify that each connection is properly configured.
As per the given example, the correct configuration should be:

```
Connection Name: foo-p2
Host WWNN = 200000E0-8B1C0A65
Host WWPN = 210000E0-8B1C0A65
Profile Name = Linux
Unit Offset = 0
Controller 1 Port 1 Status = Online

Connection Name: foo-p1
Host WWNN = 200100E0-8B3C0A65
Host WWPN = 210100E0-8B3C0A65
Profile Name = Linux
Unit Offset = 0
Controller 2 Port 1 Status = Online
```

**Bugzilla #449648**

Red Hat discourages the use of quota on EXT3 file systems, because in some cases doing so can cause a deadlock. Testing has revealed that kjournald can sometimes block some EXT3-specific callouts that are used when quota is running. As such, Red Hat does not plan to fix this issue in Red Hat Enterprise Linux 4, as the modifications required would be too invasive. Note that this issue is not present in Red Hat Enterprise Linux 5.

**Bugzilla #452578**

The Desktop Sharing connection icon displays its context menu when you double-click it, not when you right-click it. All other icons display their context menus when you right-click on them.

**Bugzilla #451873**

If the ib_ehca InfiniBand driver is loaded in port auto-detection mode (using module parameter nr_ports=-1), the IP-over-InfiniBand network interfaces (ibX) might become available too late.
When this occurs, the ifup ibX command issued from the openibd startup script will fail; consequently, the ibX interface will not become available. When this occurs, use the command rcnetwork restart to fix the problem.

**Bugzilla #451873**

In the IBM Redbook “Implementing InfiniBand in IBM System p” (SG247351), Table 6-3 (on page 220 of the PDF version) describes debug code bit definitions, where several HCA error indicator bits are also described. Note that with eHCA2 adapters, bits 46 and 47 of these error indicator bits might return false positives.

**Bugzilla #366961**

On HP ICH10 workstations, audio is only enabled through the front 3.5mm jacks. As such, to receive any audio output or use recording, you should plug your headphones, speakers, or microphones into the front jacks. At present, the rear jacks, internal speaker, and master volume for this workstation do not work.

**Bugzilla #429727**

With this update, the default PCI detection and ordering mode for the following models has changed:

- HP Proliant DL 580 G5
- HP Proliant DL 385 G2
- HP Proliant DL 585 G2

These models use a device scanning and enumeration mode which is not the default for Red Hat Enterprise Linux 4 or 5. The mode used by these HP Proliant models could result in add-on cards being detected and added prior to onboard/internal devices. This unexpected ordering could cause difficulties when installing new instances of Red Hat Enterprise Linux, adding hardware, and performing maintenance.

The numbering of network interface cards (NICs) for the aforementioned HP Proliant models may change when they are updated with the Red Hat Enterprise Linux 4.7 kernel. The installer changes NIC numbering if the **HWADDR=MAC ADDRESS** parameter is not defined in `/etc/sysconfig/network-scripts/ifcfg-eth[X]` for each installed NIC. As such, Red Hat recommends that you ensure this parameter is defined in order to avoid any problems arising from an unexpected NIC enumeration.
In addition, to avoid any NIC enumeration changes after updating these **HP Proliant** models to Red Hat Enterprise Linux 4.7, add the kernel boot parameter `pci=nomfsort` to `/boot/grub/grub.conf`.

**Bugzilla #232499**

When a volume group contains a mirror or snapshot, issuing the `lvchange` command with a volume group parameter may result in the following error messages:

- Unable to change mirror log LV fail_secondary_mlog directly
- Unable to change mirror image LV fail_secondary_mimage_0 directly
- Unable to change mirror image LV fail_secondary_mimage_1 directly

These messages can be safely ignored.

**Bugzilla #441870**

*Dell PowerEdge SC1435* systems may hang during boot-up. To avoid this, edit the `terminal` line in `grub.conf` and replace the string `serial console` with `console serial`.

**Bugzilla #456533**

The updated **ixgbe** driver does not support the Intel 82598AT (*Copper Pond 10GbE*).

**Bugzilla #454872**

Red Hat Enterprise Linux 4.8 can detect online growing or shrinking of an underlying block device. However, there is no method to automatically detect that a device has changed size, so manual steps are required to recognize this and resize any file systems which reside on the given device(s). When a resized block device is detected, a message like the following will appear in the system logs:

```
VFS: busy inodes on changed media or resized disk sdi
```

If the block device was grown, then this message can be safely ignored. However, if the block device was shrunk without first shrinking any data set on the block device, the data residing on the device may be corrupted.

It is only possible to do an online resize of a file system that was created on the entire LUN (or block device). If there is a partition table on the block device, then the file system will have to be unmounted to update the partition table.

**Bugzilla #479467**

There is a known memory leak with the `res_n*` family of resolver routines (i.e. `res_nquery`, `res_nsearch`, and `res_nmkquery`). Programs that use these functions will leak memory over time. This has been fixed in newer versions of glibc; however, the fix is too invasive to be applied to Red Hat Enterprise Linux 4. Programs that use these functions may need to be restarted occasionally to free memory.

**Bugzilla #452513**

The number of devices that can be handled during installation of Red Hat Enterprise Linux 4 depends on the size of the installation `initrd` image. Therefore, in situations where there are many devices attached to a machine (such as heavily populated Fibre Channel setups), installation will not be possible unless the number of visible devices is reduced.

**Bugzilla #438895**

The aacraid driver update that was first introduced in Red Hat Enterprise Linux 4.7 requires up-to-date Adaptec PERC3/Di firmware. Subsequent updates of Red Hat Enterprise Linux 4 (including this 4.8 update) require that the PERC3/Di firmware is at version 2.8.1.7692, A13 or newer. The firmware may be obtained at the following location:

http://support.dell.com/support/downloads/download.aspx?c=us&cs=555&l=en&s=biz&releaseid=R168387&SystemID=PWE_PNT_PIII_1650&servicetag=&os=WNET1&impid=-1&formatcnt=4&libid=35&fileid=228550

**Bugzilla #492371**

During installation, anaconda may not remove all the Logical Volume Manager (LVM) metadata that exists on a system prior to installation. This extra metadata may cause LVM tools to report missing volume groups or logical volumes after installation. To work around this issue, remove the stale LVM metadata after the installation is complete.

**Bugzilla #481190**

multipath does not silence the error messages printed by any of its callout programs. Therefore, if multipath is run when paths are down, various error messages may be displayed. The messages that are displayed depend on the specific callout programs that multipath is using.
For example, if multipath is run while there are failed SCSI devices, scsi_id will print:

```
<H>:<B>:<T>:<L>: Unable to get INQUIRY vpd 1 page 0x0.
<H>:<B>:<T>:<L>: sg_io failed status 0x0 0x1 0x0 0x0
```

Or, if multipath -ll is run while an EMC CLARiiON is down, the mpath_prio_emc priority callout will print:

```
query command indicates error
```

9.2. ia64 Architectures

**Bugzilla #453033**

On some SGI Altix systems that feature the IOC4 multi-function device, you may encounter problems when using attached IDE devices (such as CD-ROM drives). This is caused by a bug in the sgiioc4 IDE driver, which prevents some devices from being detected properly on system boot. You can work around this bug by manually loading the driver, which in turn allows attached IDE devices to be detected properly. To do so, run the following command as root:

```
/sbin/modprobe sgiioc4
```

A. REVISION HISTORY

Revision 2-4.400    2013-10-31         Rüdiger Landmann    Rebuild with publican 4.0.0
Revision 2-4        2012-07-18         Anthony Towns       Rebuild for Publican 3.0
Revision 2-3        Tue Feb 8 2011     Michael Hideo       BZ#627110 BZ#627111
Revision 1.0-0      Thu May 07 2009    Ryan Lerch          Added Release Notes Updates for the General Availability (GA)
Revision 0.1-0      Thu May 07 2009    Ryan Lerch          Initial Version of the Release Notes
Motivation

Suppose you are an independent software developer, and your software package Windows Defeater®, widely available on sourceforge under a GNU GPL license, is getting international attention and acclaim. One of your fan clubs, located in the Polish city of Łódź (pron. woodge), calls you on a Sunday afternoon, urging you to pay them a visit on Monday morning and give a talk on the open source software initiative and standards. The talk is scheduled for 10am Monday at the largest regional university, and both the university President and the city Mayor have been invited to attend, and both have confirmed their arrival. What should you do?

Obviously, you want to go, but it is not so simple. You are located in the Polish city of Wrocław (pron. vrotslaf), and while it is only 200 km away from Łódź, there is no convenient morning train connection from Wrocław to Łódź. The only direct train leaves Wrocław at 7:18 in the morning, but arrives at Łódź Kaliska at 11:27, which is across town from the University's Department of Mathematics and Computer Science. An earlier regional train leaves Wrocław at 5am, but with two train switches, you would arrive at Łódź Kaliska at 10:09, which is still not early enough. There are no flights connecting the two cities, but there are numerous buses, which are probably your best choice. And there is still the option of driving, as much as you love it.

What we are dealing with here is a planning problem, with a number of choices, which need to be taken under consideration in turn, their outcomes analyzed and further actions determined, which bring about still more choices. What we in fact do is repeatedly **search** the space of possible actions and their outcomes. Searching is a component of all methods of artificial intelligence, and the ability of efficient searching seems to be an inherent attribute of intelligence proper.

State space representation

1. description of the state space
   • often the state space has the form of a Cartesian product of the domains of the problem description parameters
   • the space can be finite or infinite, although this does not need to correspond to the complexity of the problem (e.g., consider that the state space for the game of chess is finite)
   • some states in the state space can be illegal (or unreachable) states
2. description of the initial state, always explicit
3. description of the goal state, explicit or implicit (goal condition)
4. description of the available state transition operators
   • e.g., as applicability conditions and effect lists
   • operators may be parametrized (e.g., consider a maze — one move operator, four operators, or the number of states times four)

⇒ The task is to determine the sequence of operators (and their arguments, if parametrized) which leads from the initial state to (one of) the goal state(s).

General scheme of searching the state space

    PROCEDURE GT(St)              ; St - initial state description
    BEGIN
      UNTIL Term(St) DO           ; St satisfies the goal condition
      BEGIN
        Op := first(ApplOps(St))  ; select operator applicable in state St
        St := Apply(Op, St)       ; the result of applying Op to state St
      END
    END

Although the above statement of the GT (Generate-and-Test) algorithm suggests that it always selects the first operator applicable in the state St, it is possible to influence this choice by an appropriate ordering of the operator list. We will call the method of choosing the operator a strategy. Having a good strategy is the key problem in searching.

Blind and informed strategies

A strategy may be completely general, based only on the syntactic properties of the space representation, and thus applicable to any state space search problem. Such strategies are termed **blind**. Example: a blind (literally), but perfectly useful search strategy for the maze problem is the right hand strategy.
If you move along the wall, keeping contact with it using your right hand, then you will find the exit from the maze, provided one exists.

A strategy may also utilize some information about the current state, which is specific to a problem domain, and requires an insight into the problem beyond its syntactic analysis. Such strategies are termed **informed**. Informed strategies take advantage of information which may not be available in a general case, and may not be understandable to a completely general search algorithm. Example: suppose we search for an exit from a maze, and we know there is noise outside, but no sound sources inside the maze. Then, simply listening in all directions may be the basis for an informed search strategy (although this strategy may be efficient only in states which are close to the exit).

Irrevocable and tentative search strategies

We can consider two different approaches to the problem of searching:

- when an **introspection** is possible, i.e. an insight into the whole state space — in other words, theoretically simulating the search and backtracking moves,
- when such a possibility does not exist and moves in the state space must be made irreversibly.

⇒ Even if we have a complete and 100% correct problem description, the introspection ability may be limited by the space size (think chess).

⇒ In turn, in some problems all operators have corresponding reverse operators, which in practice gives one the ability to reverse moves, even if such a possibility does not exist in theory. On the other hand, it may incur additional cost, and may lead to looping.

Toy problems — missionaries and cannibals

A number of toy problems have been formulated, which serve as test cases for new algorithms being developed. These problems, while simple for a human, contain enough difficulty to verify the problem-solving ability of an algorithm.
One of these toy problems is the missionaries and cannibals problem:

- 3 missionaries and 3 cannibals are at one bank of a river,
- there is a two-person boat,
- all must be transferred to the other bank of the river, while making sure that at no time or place the cannibals outnumber the missionaries.

The monkey and bananas problem

Another classical toy problem in artificial intelligence is the **monkey and bananas** problem:

- a monkey is in a closed room,
- a bunch of bananas is hanging from the ceiling, too high for the monkey to reach,
- there is a table in the opposite corner of the room, which may be moved and climbed on to reach the bananas, if the monkey decides to do so.

Short review

1. What are the elements of the state space representation of a problem?
2. What are blind and informed search strategies? What is the difference between them?

Backtracking search (BT)

The BT algorithm efficiently searches the solution space without explicitly building the search tree. The data structures it utilizes to hold the search process state are hidden (on the execution stack). It is possible to convert this algorithm to an iterative version, which builds these structures explicitly. The iterative version is more efficient computationally, but lacks the clarity of the recursive statement of the algorithm.

Backtracking search — properties

BT has minimal memory requirements. During the search it only keeps a single solution path (along with some context for each element of the path). Its average-case space complexity is $O(d)$, where $d$ is the distance from the initial state to the solution (measured in the number of operator steps).

The time complexity is worse. In the worst case the BT algorithm may visit all the states in the space before finding the solution. However, it permits one to apply a strategy — informed or blind — by appropriately sorting the operator list during its creation.
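The recursive statement of BT referred to above is not reproduced in this excerpt; the following is a minimal Python sketch of backtracking in that spirit. The `goal`, `applicable_ops`, and `apply_op` hooks, and the explicit `limit` parameter, are stand-ins for a concrete problem, not part of the original lecture.

```python
def bt(state, goal, applicable_ops, apply_op, limit):
    """Recursive backtracking: try operators in (strategy-determined) order,
    undoing a choice by returning to the caller when it leads nowhere.
    `limit` bounds the recursion depth so this sketch always terminates.
    Returns a list of operators leading to a goal state, or None."""
    if goal(state):
        return []                         # empty operator sequence: done
    if limit == 0:
        return None
    for op in applicable_ops(state):      # the ordering here IS the strategy
        result = bt(apply_op(op, state), goal, applicable_ops, apply_op,
                    limit - 1)
        if result is not None:
            return [op] + result          # this operator, then the rest
    return None                           # all choices failed: backtrack
```

Only the current path lives on the call stack, which matches the $O(d)$ space bound stated above.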
Another important problem with the BT algorithm is that it does not guarantee to find a solution, even if one exists. If the state space is infinite, the algorithm may at some point select an operator which leads into a subtree of the whole search tree which is infinite but contains no solution states. In this case, the algorithm will never backtrack from the wrong operator choice, and will keep searching forever.

Checking for repeating states

One of the problems with the BT algorithm — as well as with all search algorithms — is the potential for looping. If the algorithm ever reaches a state which it has already visited on its path from the initial state, then it will repeatedly generate the same sequence of states and may never break out of the loop.

It is easy to avoid this problem. The simplest way is to check, after reaching each new state, whether that state is already present in the currently examined path from the initial state. It is also possible to check more carefully — whether the newly found state has not previously been found and explored. For this test a set of all visited states must be kept, a so-called Closed list. In the recursive implementation of the algorithm this list needs to be global for all the invocations of the procedure, and all newly generated states must be checked against it.

Both checks incur significant computational overhead. They can be skipped in order to save time, but at the risk of looping.

Search depth limiting with iterative deepening

A serious problem for the BT algorithm are infinite (or very large) spaces, which it generally cannot handle. If the algorithm makes a wrong choice (of an operator), and starts exploring an infinite, or a very large, subtree which contains no solution, it may never backtrack and will not find the solution. Particularly fatal may be wrong choices made at the very beginning of the search.
This is a problem not just with BT but with all "optimistic" algorithms, which prefer to push ahead as long as possible, and do not worry about bad moves. For many such algorithms, simply limiting the search depth to some "reasonable" value is a general and effective protection against the consequences of taking wrong turns. It is, however, generally not easy to determine such a "reasonable" value. Setting it too high reduces the efficiency of this countermeasure, while setting it too low runs the risk of not finding a solution when one exists.

An approach used with BT, and in similar optimistic algorithms (preferring marching forward), is a variant of the above, called depth limiting with iterative deepening, or just iterative deepening. With this modification BT is complete — as long as a solution for the problem (a path to the goal state) exists, the algorithm will find it. However, this modification may make BT very inefficient, for example, when the depth limit grows too slowly and shallow levels are re-explored many times.

Heuristics and static evaluation functions

The algorithms presented so far are simple and do not generally require an informed strategy to work. Having and using such a strategy is, however, always desirable. We will call a **heuristic** any body of knowledge about the problem domain which:

- cannot be obtained from a syntactic analysis of the problem description,
- may not be formally derived or justified, may even be false in some cases, and may then lead to wrong hints for searching,
- but which in general helps make good moves in exploring the search space.

Having a heuristic should permit one to build informed search strategies. A general and often used scheme for constructing strategies using heuristic information is a static evaluation function. For each state it estimates its "goodness", i.e. the chance that a solution path passes through this state, and/or the proximity to the goal state on such a path.

Hill climbing approaches

An evaluation function can be applied directly in searching.
This leads to a class of methods called **hill climbing**. Hill climbing methods generally belong to the class of greedy algorithms. Direct application of these methods is limited to domains with a very regular evaluation function, e.g. a strictly monotonic one. Applying hill climbing in practical cases typically leads to the following problems:

1. local maxima of the evaluation function,
2. "plateau" areas of the evaluation function,
3. oblique "ridges" of the evaluation function.

A simple solution which may alleviate, but seldom eliminate, these problems is to occasionally make a random move (operator) to transfer the search focus to a different area of the state space. Hill climbing is generally useful for applications with continuous parameter domains, where using other, more combinatorial, algorithms is difficult.

Simulated annealing

An efficient and often used variant of hill climbing is a technique called *simulated annealing*. The name refers to the industrial process of annealing, in which a cast metal is cooled slowly and gradually, allowing it to achieve the state of the global minimum of energy, with a total particle ordering within the whole volume.

The method generates random moves in addition to the basic hill climbing moves, and then randomly decides whether or not to execute them, according to a temperature-dependent probability distribution. If the generated move improves the evaluation function value, then it is always executed. On the other hand, if it worsens the value of the current state, then it is executed with a probability $p < 1$, which depends on how much the evaluation worsens. At the same time, during the operation of the algorithm, the "temperature" value is gradually lowered, which decreases the probability of selecting "bad" moves.
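In the standard formulation (assumed here, since the original diagram is not reproduced in this excerpt), the acceptance rule is the Metropolis criterion: a move that worsens the evaluation by $\Delta > 0$ is executed with probability $p = e^{-\Delta/T}$. A minimal sketch, with the random source injectable for testing:

```python
import math
import random

def accept(delta, temperature, rng=random.random):
    """Metropolis acceptance rule for simulated annealing.

    `delta` is the change in cost (positive = the move worsens the state).
    Improving moves are always taken; a worsening move is taken with
    probability exp(-delta / temperature), which shrinks as the
    temperature is lowered."""
    if delta <= 0:                # move improves (or keeps) the evaluation
        return True
    if temperature <= 0:          # frozen: behave like pure hill climbing
        return False
    return rng() < math.exp(-delta / temperature)
```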
The simulated annealing approach has been successfully applied to such problems as designing VLSI circuits and various other networks, allocating resources or tasks in some industrial processes, and other complex optimization problems. An important issue in its application is the selection of its parameters, such as the temperature lowering rate.

Short review

1. Which requirements of the BT algorithm are more critical: computation time, or memory? Justify your answer.
2. Under what circumstances may the BT algorithm NOT find a solution, even though one exists? State your answer separately for finite and infinite search spaces.
3. What is the phenomenon of repeating states in search algorithms? What are its possible consequences?
4. What problem is solved by the iterative deepening technique? In which cases is it necessary to use it?
5. What are the main qualitative problems of gradient search algorithms (i.e. excluding the computational complexity)?

Graph searching

Recall the iterative deepening version of the backtracking (BT) algorithm, and the problem of repeated explorations of the initial part of the search space. In order to avoid such repeated exploration one might introduce an explicit representation of the search graph, and keep in memory the explored part of the search space. Algorithms which do this are called **graph searching** algorithms.

General graph searching strategies (blind):

- breadth-first search strategy (BFS),
- depth-first search strategy (DFS),
- other strategies.

An example: the 8-puzzle

The 15-puzzle is popular with school children. The 8-puzzle is a reduced version, suitable for testing various artificial intelligence algorithms and strategies, and for presenting their operation.

Breadth-first search (BFS)

- Explore all the states within the distance of $d$ from the initial state $s_0$ before exploring any states at the distance $(d + 1)$ or more from $s_0$.
- Always finds a solution, provided one exists.
- What’s more, it always finds the optimal solution (ie. finds the shortest path from the initial state to any state).
- Is not inherently resistant to getting trapped in state loop sequences and may require the use of the Closed list.
- The space and time complexity of the algorithm are terrible, both at $O(b^d)$, where:
  - $b$ — average number of branches growing from a node (branching factor),
  - $d$ — distance from the initial state to the solution (operator steps).
- Worst and average case complexity are practically equal (best case likewise).
- Implementation note: append newly discovered states to the end of the Open list. (Where we talk about lists of nodes, in practice often faster data structures, like hash tables, are used.)

Breadth-first search — an example

The diagram presents a section of a breadth-first search graph. The numbers above the state miniatures (1–26) show the node selection order for the graph expansion.

Depth-first search (DFS)

- Explore all the newly discovered descendant states of the current state $n$ before returning to exploring the neighbors of the state $n$.
- Offers none of the BFS’s guaranteed properties (finding the best, or any, solution).
- Worst case complexity: exploring and storing the whole space.
- Average case complexity: $O(b^d)$ both in time and space.
- For infinite spaces the only practically useful variant of this algorithm is the depth limitation with iterative deepening (but DFS graph searching is not so pathetically wasteful as was the case with the BT algorithm).
- The efficiency of the algorithm may improve dramatically for cases significantly better than the average (particularly lucky ones), so it makes sense to use it when good heuristics are available.
- Implementation note: prepend all the newly discovered states to the front of the Open list.

A section of an “average” depth-first search graph with the depth limit of 5. The state numbers again show the order of node selection for graph expansion.
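The two implementation notes above (append newly discovered states for BFS, prepend them for DFS) can be illustrated with a single sketch. The graph representation and the `successors` callable are assumptions made for the illustration.

```python
from collections import deque

def graph_search(start, goal, successors, strategy="bfs"):
    """Blind graph search; the Open list is a deque: appending newly
    discovered states gives BFS, prepending gives DFS.  `successors` is a
    hypothetical callable returning neighbor states; the Closed set guards
    against state loop sequences."""
    open_list = deque([start])
    closed = set()
    parent = {start: None}
    while open_list:
        node = open_list.popleft()
        if node == goal:
            # Recover the solution path by tracing parents back to the start.
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        closed.add(node)
        for s in successors(node):
            if s not in closed and s not in parent:
                parent[s] = node
                if strategy == "bfs":
                    open_list.append(s)      # FIFO order: breadth-first
                else:
                    open_list.appendleft(s)  # LIFO order: depth-first
    return None  # the Open list emptied: no solution exists
```

On a small diamond-shaped graph the two strategies return different (here equally long) paths, showing the effect of the Open list discipline alone.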
Uniform-cost search (UC)

In cases when the costs of all moves are not equal, breadth-first search, which is based on the number of moves, obviously no longer guarantees optimality. A simple extension of the breadth-first algorithm finds the optimal path for any (positive) cost of a single move. This algorithm is called the uniform-cost (UC) search, and works by always selecting for expansion the graph node with the lowest path cost. In the case of equal costs, it is identical to breadth-first search.

The optimality of the algorithm can be (trivially) proved as long as the cost of a single move is bounded below by some positive value $\epsilon$. But since the algorithm is guided by the path cost, its complexity cannot be characterized as a function of $b$ and $d$. Instead, if $C^*$ denotes the cost of the optimal solution, the worst case complexity of the algorithm — both time and memory — is $O(b^{1+\lceil C^*/\epsilon \rceil})$. In the case of equal costs, this is $O(b^d)$.

Search termination

The goal of searching might be just to find some path to the goal, or to find the optimal path. In the former case, the algorithm may terminate when it discovers that the state it has just reached, ie. that has just been placed on the Open list, is the goal state. But can we do the same when searching for an optimal solution?

The optimal search should be terminated when the algorithm has just chosen a goal node (possibly one of a few already reached goal nodes) for expansion. The expansion can then be abandoned and the algorithm terminated, and the best known path to the goal node is the optimal solution. Since the algorithm systematically finds all the cheapest paths, a decision to expand a node means that there cannot exist any cheaper path to it. Before that happens, however, the algorithm keeps exploring cheaper paths, and there is no guarantee that it would not find a new, better path to the goal node.
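The expand-cheapest-first rule and the expansion-time termination test described above can be sketched as follows. The `successors` callable yielding (state, step cost) pairs is an assumption of the sketch.

```python
import heapq

def uniform_cost(start, goal, successors):
    """Uniform-cost search sketch: always expand the Open node with the
    lowest known path cost.  `successors(n)` is assumed to yield
    (neighbor, step_cost) pairs with positive step costs."""
    open_heap = [(0, start)]
    best = {start: 0}
    while open_heap:
        cost, node = heapq.heappop(open_heap)
        if cost > best.get(node, float("inf")):
            continue          # stale Open entry: a cheaper path was found later
        if node == goal:
            return cost       # terminating on *expansion* preserves optimality
        for nxt, step in successors(node):
            new = cost + step
            if new < best.get(nxt, float("inf")):
                best[nxt] = new
                heapq.heappush(open_heap, (new, nxt))
    return None
```

Note how the goal test happens only when the goal node is popped for expansion: in the example below the direct arc of cost 10 is reached first, yet the cheaper two-step path of cost 2 is the one returned.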
Best-first search

The most straightforward application of a heuristic state evaluation function to graph searching leads to the **best-first search**. At any point, it chooses to expand the states with the best heuristic value.

With a good evaluation function, one which correctly evaluates states and decreases in value along the path to the solution, the best-first search algorithm proceeds directly toward the goal state, wasting no time exploring any unnecessary states (graph nodes). Even with slight defects in the evaluation function, with a few values a little off but no systematic errors, this scheme works very well in guiding the search space exploration process. The problems start when the evaluation function is wrong in a larger (perhaps infinite) part of the search space, and consequently indicates as good some states which do not lead to a solution. In such cases the best-first strategy exhibits the same problems as depth-first search, even if the evaluation function correctly evaluates many, or most, states.

Best-first as a heuristic version of depth-first search

Let us observe that the best-first approach in fact does not introduce a new systematic scheme of searching a graph. Indeed, it is a special case of depth-first search with the node-selection strategy based on the heuristic function. In other words, if a heuristic function is available, then the depth-first method becomes the best-first algorithm. It therefore has all the deficiencies of any depth-first approach, and it is only reasonable to use it with a depth limitation (and iterative deepening). It turns out there exists a better way of using a heuristic strategy in depth-first searching, which likewise protects against exploring infinite spaces, but is smarter than a fixed depth limit.

Implementing cost-based graph searching

From an implementation point of view the earlier BFS and DFS algorithms are very different from the UC and best-first algorithms.
In selecting nodes for expansion they use the graph geometry and, as pointed out, may be implemented by simply appending or prepending nodes to the Open list. The node selection in the UC and best-first algorithms is based on cost functions, although computed differently, so the Open list would have to be kept sorted, which rules out a hash table implementation. A good data structure often useful for the Open list is a priority queue, which permits a trivial best node selection, and inexpensive additions and deletions ($O(\log(N))$).

Often useful in an implementation of the UC search is a construction from Dijkstra's algorithm for finding (all) the single-source shortest paths in a graph. Dijkstra's algorithm will not be covered here, but its method of recording and updating the optimal paths with backpointers is presented below.

Implementing graph searching algorithms

A graph searching algorithm must explore new states, try available actions, and, when the goal has been reached, it must also allow one to retrieve the complete solution path, which is a sequence of operators. If the task was to find the optimal solution, then this sequence must be optimal as well. In order to accomplish this it is useful to maintain a data structure of backpointers.

Definition: the backpointer in each graph node points to its (immediate) predecessor on the best known path from the start node $S_0$. Associated with each backpointer is the total length of this path.

The diagram on the left shows the structure of an example graph with the costs of state transitions. The second diagram shows the state of the search after the first step: exploring the $S_0$ state, finding two descendant states by two available operators (black arrows), and installing the backpointers (red broken arrows).

Creating the backpointers

The backpointers are easy to create and update while the graph is being explored.
Whenever the algorithm adds a new arc to the graph leading to an unknown node, it automatically installs a backpointer, which is the reverse of the arc being added. The value associated with the backpointer is computed as the sum of the backpointer value of the predecessor and the cost of the arc.

The first diagram (l) shows the algorithm state after the first step from $S_0$: having applied two operators (black arrows), it created the successor states and installed their backpointers (red broken arrows). Next (r) the algorithm executed two further steps, discovered two more states, and installed their backpointers, computing their associated values of the paths from $S_0$ to the given node.

Updating the backpointers

The first diagram (l) shows the situation after the execution of a few initial steps exploring the graph. The algorithm found a path to the state $S_n$, created the backpointer for this state, and recorded the length of the path (10). The second diagram (r) shows the situation after making one more step, discovering the second (alternative) path to the state $S_n$, of length 9. Since the cost of the best path to $S_n$ known so far was 10, the algorithm changed its backpointer, and stored the new value (9) with it. If, however, the length of the newly found path had been worse than the best already known, the algorithm would have left the backpointer intact.

Thanks to backpointers, it is possible at any time to recover the best known path to any state, tracing it from the back.

Short review

1. What is the difference between the uniform-cost and breadth-first search?
2. What is the difference between the depth-first and best-first search?
3. Describe the usage of the Open and Closed lists in graph search algorithms.
4. What are backpointers and how are they used in graph search algorithms?
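The backpointer install-or-update rule from the preceding section can be sketched as a single relaxation step. Representing the backpointers and path lengths as plain dicts is an assumption of the sketch.

```python
def relax(node, new_parent, arc_cost, backpointer, dist):
    """Install or update the backpointer of `node` given a newly found arc
    from `new_parent`, following the update rule described above.
    `backpointer` and `dist` are plain dicts (a hypothetical representation
    of the per-node backpointer and its associated path length)."""
    candidate = dist[new_parent] + arc_cost
    if candidate < dist.get(node, float("inf")):
        backpointer[node] = new_parent   # the reverse of the arc just added
        dist[node] = candidate           # length of the best known path
        return True                      # the backpointer changed
    return False                         # worse path: leave it intact
```

Replaying the lecture's example (a first path of length 10, then an alternative of length 9) shows the backpointer being redirected exactly once.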
A modified node selection — the already incurred cost

Consider the following deterministic state (node) evaluation functions:

$h^*(n)$ — the cost of the cost-optimal path from $n$ to the goal
$g^*(n)$ — the cost of the cost-optimal path from $s_0$ to $n$

Therefore:

$f^*(n) := g^*(n) + h^*(n)$

$f^*(n)$ — the cost of the cost-optimal path from $s_0$ to the goal, going through $n$

Having access to the $f^*(n)$ function would allow one to always select the nodes on the optimal path from the start to the goal. In fact, it would suffice to use the $h^*(n)$ function. In both cases, the agent would go directly to the goal. Unfortunately, these functions are normally not available. We are forced to use their approximations to select nodes in the graph. However, when using the approximations, the search based on the $f^*(n)$ function does not necessarily proceed exactly like that based on the $h^*(n)$ function.

A modified node selection — the A* algorithm

Consider the following heuristic (approximate) state evaluation functions:

$h(n)$ — a heuristic approximation of $h^*(n)$
$g(n)$ — the cost of the best known path from $s_0$ to $n$; note $g(n) \geq g^*(n)$
$f(n) := g(n) + h(n)$

How does the strategy using the $f(n)$ approximation work? If $h(n)$ estimates the $h^*(n)$ value very well, then the algorithm works perfectly, going directly to the goal. If, however, the $h(n)$ function is inaccurate, and eg. reports some states to be better than they really are, then the algorithm will first head in their direction, lured by the low values of $h(n)$, while $g(n)$ is still negligible. After some time, however, such erroneously estimated paths will stop being attractive, due to the increasing $g(n)$ component, and the algorithm will switch its attention to more attractive nodes. The attraction of a node here is not affected by how far it is from the start or from the goal.
Instead, it is determined only by the combined estimate of the total cost of a complete start-to-goal path running through that node. An algorithm using a strategy with the above $f(n)$ function is called the A* algorithm.

The evaluation function in the A* algorithm

The $h(n)$ and $g(n)$ components of the $f(n)$ function represent the two opposite trends: the optimism ($h(n)$) and the conservatism ($g(n)$). We can freely adjust the strategy one way or the other by using the formula:

$$f(n) := (1 - k) \times g(n) + k \times h(n)$$

By increasing the weight coefficient $k$ we can bias the search toward a more aggressive (and risky) behavior when, eg., we trust the $h(n)$ function and want to proceed rapidly. On the other hand, by decreasing this coefficient, we enforce a more careful exploration of the search space, moving ahead slower, but possibly compensating for some of the $h(n)$ function's errors. Note that in the extreme cases, $k = 1$ yields the best-first search, while $k = 0$ yields the uniform-cost search. But it is the quality of the $h(n)$ function that has the biggest influence on the search process.

The h(n) function properties in A*

The heuristic evaluation function $h(n)$ in the A* algorithm is called **admissible** if it bounds from below the real cost function $h^*(n)$, ie. $\forall n \; h(n) \leq h^*(n)$. Admissibility means chronic underestimation of the future costs, so it is also referred to as optimism. It can be proved that whenever there exists a path from the start node to the goal, A* with an admissible heuristic will always find the best such path.

This sounds nice, so is it hard to find such an admissible heuristic? Not necessarily. For example, $h(n) \equiv 0$ indeed bounds $h^*(n)$ from below for any problem. And can such a trivial heuristic be useful? The answer is: not really.
Such an algorithm always selects the nodes with the shortest path from $s_0$, so it is equivalent to the breadth-first (more generally: uniform-cost) search, which indeed always guarantees to find the optimal solution, but, as we already know, is not such a great algorithm.

Naturally, the better $h(n)$ approximates $h^*(n)$, the more efficient the search is. In fact, it can be proved that for any two evaluation functions $h_1(n)$ and $h_2(n)$, such that for all states $h_1(n) < h_2(n) \leq h^*(n)$, using $h_1$ in the search leads to exploring at least as many states as using $h_2$ does.

The h(n) function properties in A* (cntd.)

Admissibility of the heuristic function $h(n)$ is an interesting property, which can frequently be proved for functions coarsely approximating $h^*(n)$, but cannot always be proved for a painstakingly elaborated function, such as one obtained by numerical learning from a series of examples (which is one method of constructing heuristic functions, which we will look at later).

An even stronger property of a heuristic evaluation function $h(n)$ is its **consistency**, also called the **monotone restriction**, or simply the triangle property:

$$\forall_{n_i \rightarrow n_j} \; h(n_i) - h(n_j) \leq c(n_i, n_j)$$

It can be proved that for a function $h$ satisfying the monotone restriction, the A* algorithm always already knows the best path to any state (graph node) that it chooses for expansion. In practice this makes it possible to simplify the search algorithm implementation, if we know that the evaluation function is consistent.

A* algorithm complexity

For most practical problems the number of nodes of the state space grows exponentially with the length of the solution path. Certainly, an efficient heuristic could decrease the computational complexity of the algorithm. A good question is: when could we count on such a reduction? It can be proved that, for this to happen, i.e.
for the A* algorithm to run in polynomial time, the estimate error of the heuristic evaluation function should not exceed the logarithm of the actual solution length:

$$|h(n) - h^*(n)| \leq O(\log h^*(n))$$

In most practical cases one cannot count on finding such good heuristics, so the A* algorithm should be considered exponential. However, most often this bad time performance is not even the biggest problem with A*. Just as most other graph searching algorithms, it stores all the discovered states in memory, and usually fills up the available computer memory long before running out of its time limit.

Memory-considerate variants of A*

There are variants of the A* algorithm which cope with the memory problem.

The IDA* (Iterative-Deepening A*) algorithm sets a limit on the $f$ value up to which the algorithm is allowed to proceed. After that the limit is extended, but the explored nodes are deleted from memory.

The RBFS (Recursive Best-First Search) algorithm is more like the recursive version of the BT algorithm. It explores the search graph recursively, always keeping in mind the estimated cost of the second-best option (at all levels of recursion). When the currently explored path's estimate exceeds the memorized alternative, the algorithm backtracks. And when it does backtrack, it forgets all of the explored part of the space (but keeps the estimate of that path, in case it is later necessary to also backtrack from the original alternative).

The SMA* (Simplified Memory-Bounded A*) algorithm proceeds just like A*, but only up to the limit of the currently available memory. After that, the algorithm continues, deleting the least-promising node to make space for each newly encountered state. However, it stores in the parent of each deleted node its heuristic estimate, so in case all the preserved nodes come to have worse estimates, the algorithm may come back, and re-generate the deleted node.
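The IDA* contour scheme described above can be sketched as follows. Repeated-state checking is omitted for brevity, and the `successors` callable yielding (state, cost) pairs is an assumption of the sketch.

```python
import math

def ida_star(start, goal, successors, h):
    """IDA* sketch: depth-first contours bounded by f = g + h; the bound is
    raised to the smallest f-value that exceeded it, and the nodes explored
    in a contour are not kept in memory between iterations."""
    def search(node, g, bound):
        f = g + h(node)
        if f > bound:
            return f, None          # bound exceeded: report the f-value
        if node == goal:
            return f, g             # found: report the path cost
        minimum = math.inf
        for nxt, step in successors(node):
            t, found = search(nxt, g + step, bound)
            if found is not None:
                return t, found
            minimum = min(minimum, t)
        return minimum, None
    bound = h(start)
    while bound < math.inf:         # bound stays infinite: no solution
        bound, found = search(start, 0, bound)
        if found is not None:
            return found
    return None
```

With the trivial $h(n) \equiv 0$ the same code degenerates to iterative deepening on path cost, re-exploring each contour from scratch, which illustrates why the heuristic quality matters here too.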
Bidirectional search

It is possible to perform the search of a state space both in the **forward** and the **backward** direction. For the backward search, instead of determining the successors of the current state through all the operators applicable in this state, one derives all its predecessors, such that one of the operators is applicable in the predecessor state, and gives the current state when applied.

The backward search may, or may not, be better in a specific case than the forward search. Furthermore, in some cases very good results can be achieved by the **bidirectional** search.

Short review

1. What is the difference between the A* and best-first search algorithms? How does this difference affect the search process?
2. What are admissible heuristics for the A* algorithm? What is their practical significance?
3. What is the purpose of backpointers in graph searching algorithms? How do they improve the process of searching for the solution?
4. The heuristic search algorithm A* with an admissible evaluation function h guarantees finding an optimal solution, whenever one exists. Consider the following modifications of the f function, and answer whether they preserve the optimality property of the A* algorithm. Justify your answer.
   (a) introduction of an upper bound on the value of the h(n) function
   (b) introduction of a lower bound on the value of the g(n) function

Constructing useful heuristics

How in general can one go about constructing a useful heuristic function, without sufficient knowledge of the problem domain to design it from first principles? Experiment, experiment, experiment!

Example: heuristics for the 8-puzzle

**Heuristic 1:** count the elements in wrong places, the function $h_1(n) = W(n)$

**Heuristic 2:** for all the elements in a wrong place, compute and add up their distances from their proper place.
The number thus derived will certainly be less than the number of moves of any complete solution (so it is a lower bound of the solution length). Call it the function $h_2(n) = P(n)$.

**Heuristic 3:** $h_3(n) = P(n) + 3 \times S(n)$, where the function $S(n)$ is computed for the elements on the perimeter of the puzzle, taking 0 for those elements which have their correct right neighbor (clockwise), and taking 2 for each element which has some other element as its right neighbor. The element in the middle scores 1, if it is present.

In general, neither $S(n)$ nor $h_3(n)$ is a lower bound of the solution length. However, the $h_3(n)$ function is one of the best known evaluation functions for the 8-puzzle, resulting in a very focused and efficient search strategy. On the other hand, the $h(n) \equiv 0$ function is a perfect lower bound solution estimate, satisfying the requirements of the A* algorithm, and always finding the optimal solution. This illustrates the fact that *technically correct* is not necessarily *heuristically efficient*.

The heuristic function quality vs.
the cost of A* search

<table>
<thead>
<tr> <th rowspan="2">d</th> <th colspan="3">Search cost (nodes generated)</th> <th colspan="3">Effective branching factor</th> </tr>
<tr> <th>IDS</th> <th>A*(h₁)</th> <th>A*(h₂)</th> <th>IDS</th> <th>A*(h₁)</th> <th>A*(h₂)</th> </tr>
</thead>
<tbody>
<tr> <td>2</td> <td>10</td> <td>6</td> <td>6</td> <td>2.45</td> <td>1.79</td> <td>1.79</td> </tr>
<tr> <td>4</td> <td>112</td> <td>13</td> <td>12</td> <td>2.87</td> <td>1.48</td> <td>1.45</td> </tr>
<tr> <td>6</td> <td>680</td> <td>20</td> <td>18</td> <td>2.73</td> <td>1.34</td> <td>1.30</td> </tr>
<tr> <td>8</td> <td>6384</td> <td>39</td> <td>25</td> <td>2.80</td> <td>1.33</td> <td>1.24</td> </tr>
<tr> <td>10</td> <td>47127</td> <td>93</td> <td>39</td> <td>2.79</td> <td>1.38</td> <td>1.22</td> </tr>
<tr> <td>12</td> <td>364404</td> <td>227</td> <td>73</td> <td>2.78</td> <td>1.42</td> <td>1.24</td> </tr>
<tr> <td>14</td> <td>3473941</td> <td>539</td> <td>113</td> <td>2.83</td> <td>1.44</td> <td>1.23</td> </tr>
<tr> <td>16</td> <td></td> <td>1301</td> <td>211</td> <td></td> <td>1.45</td> <td>1.25</td> </tr>
<tr> <td>18</td> <td></td> <td>3056</td> <td>363</td> <td></td> <td>1.46</td> <td>1.26</td> </tr>
<tr> <td>20</td> <td></td> <td>7276</td> <td>676</td> <td></td> <td>1.47</td> <td>1.27</td> </tr>
<tr> <td>22</td> <td></td> <td>18094</td> <td>1219</td> <td></td> <td>1.48</td> <td>1.28</td> </tr>
<tr> <td>24</td> <td></td> <td>39135</td> <td>1641</td> <td></td> <td>1.48</td> <td>1.26</td> </tr>
</tbody>
</table>

Figure 4.8 Comparison of the search costs and effective branching factors for the ITERATIVE-DEEPENING-SEARCH and A* algorithms with $h_1$, $h_2$. Data are averaged over 100 instances of the 8-puzzle, for various solution lengths.

A heuristic search of the 8-puzzle search tree

Constructing heuristic functions (cntd.)

One of the general approaches to constructing heuristic functions is the following. Consider a simplified problem, by giving up on some requirement(s), to make finding a solution easy.
For each state generated during the search for the original problem, a simplified problem is solved (e.g., using a breadth-first search). The cost of the optimal solution for the simplified problem can be taken as an estimate (lower bound) of the solution cost for the original problem. For example, if the state space is defined with $n$ parameters, so the states are the elements of the $n$-dimensional space, then one of the parameters can be eliminated, effectively mapping the states to $(n - 1)$ dimensions. If there are a few different ways this simplification can be achieved, and we cannot choose between them (e.g., which state variable to drop), then we can use their combination for the evaluation function:

$h(n) = \max(h_1(n), \ldots, h_k(n))$

Let us note that, in the case of the 8-puzzle heuristics, allowing a teleportation of the elements to their proper place in one move would be an example of such an approach, and would give the evaluation function $h_1(n)$. Further, the agreement to move elements by a single square, but regardless of other elements possibly in the way, would give the function $h_2(n)$.

Another approach to developing a heuristic function is to work it out statistically. One needs first to determine the state attributes which are significant for the evaluation of the distance to the solution. Having done that, a heuristic function which is a linear combination of these attributes, with some unknown coefficients, can be learned. This is done by running some experiments to determine some solution distances, using a full search, or another heuristic function. The derived optimal solution distances can be used to construct a set of equations, and in effect to determine approximate values for the unknown coefficients.

Short review

1. Name and briefly describe the methods you know for creating heuristic evaluation functions.

Searching in two-person games

Games are fascinating and often intellectually challenging entertainment.
No wonder they have been the object of interest of artificial intelligence. State space search methods cannot be directly applied to games, because the opponent's moves, which are not known in advance, must be considered. The "solution" must be a scheme considering all possible reactions of the opponent. Additionally, in some games the full state information is not available to either player.

Types of games:

<table>
<thead>
<tr> <th></th> <th>deterministic</th> <th>chance</th> </tr>
</thead>
<tbody>
<tr> <td>perfect information</td> <td>chess, checkers, go, othello</td> <td>backgammon, monopoly</td> </tr>
<tr> <td>imperfect information</td> <td>battleships, blind tictactoe</td> <td>bridge, poker, scrabble</td> </tr>
</tbody>
</table>

Two-person game tree

(Diagram: a game tree with alternating MAX (X) and MIN (O) levels, down to the TERMINAL states and their utility values.)

The minimax procedure

A complete strategy for a deterministic perfect information game can be computed using the following **minimax** procedure. It computes the value of the starting node by propagating the final utility values up the game tree:

1. the levels of the tree correspond to the players' moves: MAX's and MIN's; assume the first move is MAX's,
2. assign the MAX's win value to the terminal states in the leaves (negative, if they actually represent a loss to MAX),
3. tree nodes are successively assigned the values: the maximum of the branches below if the current node corresponds to MAX, and the minimum of the branches below if the node corresponds to MIN,
4. the top tree branch with the highest value indicates the best move for MAX.

![Game Tree Diagram]

Resource limiting — using heuristics

The minimax procedure defines an optimal strategy for the player, assuming the opponent plays optimally. But only if it can be fully computed. For a real game tree this might be a problem.
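The value propagation described in the four steps above can be sketched recursively. The explicit tree representation and the `utility` and `successors` callables are assumptions of the illustration (with `utility` returning None for non-terminal nodes).

```python
def minimax(node, is_max, utility, successors):
    """Plain minimax sketch: propagate terminal utility values up the game
    tree, maximizing on MAX levels and minimizing on MIN levels.
    `utility(n)` returns the value of a terminal node, None otherwise."""
    u = utility(node)
    if u is not None:
        return u                      # a leaf: its utility value as assigned
    values = [minimax(child, not is_max, utility, successors)
              for child in successors(node)]
    # MAX takes the maximum of the branches below, MIN the minimum.
    return max(values) if is_max else min(values)
```

On a small two-ply tree with leaf utilities (3, 12) under one MIN node and (2, 8) under the other, the MIN level yields 3 and 2, and MAX at the root picks 3.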
Eg., for chess $b \approx 35$, $m \approx 100$ for a reasonable game, and a complete game tree might have about $35^{100} \approx 10^{155}$ nodes. (The number of atoms in the known part of the Universe is estimated at $10^{80}$.) To solve this problem, a heuristic function estimating a position value can be used, as in standard state space search, to determine the next move without having an explicit representation of the full search space. In the case of a two-person game this facilitates applying the minimax procedure to a partial game tree, limited to a few moves ahead.

For chess, such a heuristic function can compute the material value of the pieces on the board, eg. 1 for a pawn, 3 for a knight or a bishop, 5 for a rook, and 9 for the queen. Additionally, positional value can be considered, such as a "favorable pawn arrangement", or a higher value of the rook in the end-game (higher yet for two rooks).

Special situations in heuristic-based search

Limiting the search depth sometimes leads to specific issues, which require special treatment. One of them is the concept of quiescence search. In some cases the heuristic evaluation function of some states may be favorable for one of the players, but the next few moves — which extend beyond the minimax search limit — inevitably lead to serious shifts, like exchanging some pieces in chess. It would be useful to detect such situations, and extend the search in the corresponding part of the game tree to reach a more stable configuration, the so-called quiescent states.

Another issue is the horizon effect. It occurs when an inevitable loss for one of the players approaches, but she can postpone its onset by making insignificant moves.

Minimax search — cutting off the search

What practical effects can be obtained with the heuristic search limited to a few steps? Eg., for chess, assuming $10^4$ nodes per second and 100 seconds for a move, $10^6 \approx 35^4$ positions can be explored, which amounts to 4 moves ahead.
Unfortunately, for chess this corresponds to only the most elementary play. Additional techniques for increasing the search efficiency are needed. It turns out it is easy to make additional savings in the minimax. The most common approach is called the alpha-beta cuts.

PROCEDURE MINIMAX-ALPHA-BETA(n, alpha, beta, depth)
BEGIN
  IF depth == MAXDEPTH THEN RETURN(h(n))
  choices := Descendant_list(n)
  ;; when alpha >= beta, abandon exploring the remaining
  ;; descendants of the node n: this means a cut
  WHILE (NOT Empty(choices)) AND (alpha < beta) DO
  BEGIN
    n1 := First(choices)
    choices := Rest(choices)
    w1 := MINIMAX-ALPHA-BETA(n1, alpha, beta, depth + 1)
    IF EVEN(depth) THEN                ; for MAX's nodes
      IF w1 > alpha THEN alpha := w1
    IF ODD(depth) THEN                 ; for MIN's nodes
      IF w1 < beta THEN beta := w1
  END
  IF EVEN(depth) THEN RETURN(alpha)    ; MAX's node
  ELSE RETURN(beta)                    ; MIN's node
END

⇒ in the first call we use $\alpha = -\infty, \beta = +\infty$

$\alpha$-$\beta$ cuts — the optimal case

The optimal case of the minimax search with the alpha-beta cuts occurs when at each tree level the nodes are examined starting from the most favorable one for the given player. In such a case only one "series" of nodes is evaluated in each subtree, and a cut occurs on each return up the tree. In the above diagram the savings is 16 nodes: out of the 27 nodes at the lowest level of the tree only 11 must be evaluated. Source: Patrick Henry Winston, Artificial Intelligence, 3rd ed. (note an error: the nodes 18, 19, 21, and 22 could also be cut off).

Application of minimax and heuristics to checkers

Minimax — a multi-player generalization

The minimax algorithm can be generalized to a multi-player case. In this case, a vector evaluation function must be employed, which evaluates the position from the point of view of each player. Each player maximizes her element of the vector, and the value propagation proceeds as in the two-player case. There are other factors that have to be considered in multi-player games, such as alliances.
Sometimes it is advantageous for players to make alliances against other players, or even to change these alliances dynamically during the game.

Games with chance elements

With chance elements, the set of available actions at each step depends on some random variable, such as a throw of the dice. The analysis is more complicated and requires considering all the options, and computing the expected values over the distributions of the random variables.

Short review

1. For the following two-person game search tree, write the precise sequence of the evaluation function values computed by the minimax algorithm with alpha/beta cuts (order: left to right).

Constraint satisfaction problems

The **Constraint Satisfaction Problems** (CSP) are a special group of state space search problems defined as follows:

- a finite set of variables $X = \{x_1, x_2, \ldots, x_n\}$
- for each variable $x_i$, a finite set of its possible values, called its **domain**
- a finite set of **constraints** on the combinations of values of the variables, eg. if $x_1 = 5$, then $x_2$ must be even, and the combination $(x_1 = 5, x_2 = 8, x_3 = 11)$ is disallowed

A solution of a CSP problem is any combination of variable values satisfying all the constraints. Let us note that the CSP problems are really a special case of general state space search problems, if we treat the set of constraints as a goal specification, and assigning values to variables as state transition operators. Therefore, all the algorithms introduced earlier can be applied to these problems.

Constraint satisfaction problems (cntd.)

Examples of CSP problems are: graph or map coloring, the 8-queens problem, the SAT problem (assigning 0 or 1 values to variables of a logical formula to satisfy the formula), cryptoarithmetic, VLSI design, the node labeling problem (for object recognition in images after edge detection), task queueing, planning, and many others. Many of them are NP-hard problems.
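The CSP formulation above (variables, domains, constraints, and assignment as the state transition operator) can be illustrated with a minimal backtracking solver. The three-variable map-coloring instance and the pairwise not-equal constraint predicate are assumptions of the sketch.

```python
def solve_csp(variables, domains, conflicts, assignment=None):
    """Minimal CSP backtracking sketch: assign the variables one by one and
    backtrack when a constraint check fails.  `conflicts(var, val, asg)` is
    a hypothetical predicate returning True if `val` violates a constraint
    given the partial assignment `asg`."""
    if assignment is None:
        assignment = {}
    if len(assignment) == len(variables):
        return assignment                 # all variables assigned: a solution
    var = variables[len(assignment)]      # the next unassigned variable
    for val in domains[var]:
        if not conflicts(var, val, assignment):
            result = solve_csp(variables, domains, conflicts,
                               {**assignment, var: val})
            if result is not None:
                return result
    return None                           # no value works: backtrack
```

For the map-coloring instance used later in the text ($D_1 = \{R,G,B\}$, $D_2 = \{R,G\}$, $D_3 = \{R\}$, all variables pairwise different), the solver backtracks out of the $x_1 = R$ and $x_1 = G$ branches and returns $x_1 = B$, $x_2 = G$, $x_3 = R$.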
A CSP problem may have no solution, a unique solution, or many solutions. The goal may be to find one solution, all of the solutions, or the best solution according to some cost function.

The constraints in a CSP problem may be assumed to be binary, i.e., constraining pairs of variables. If a CSP problem contains other constraints, then the $n$-ary constraints (for $n > 2$) can be converted to equivalent binary constraints, and the unary constraints can be built into their respective variables’ domains and dropped.

Local constraint satisfaction

Let’s consider the map coloring problem. We have to assign colors to the areas of a given map from sets of allowed colors, possibly different for different areas, so that adjacent areas have different colors.

Before we start searching the space of possible value assignments to variables, we can conduct some local constraint satisfaction analyses. Let’s consider the constraint graph of a CSP problem, whose nodes correspond to the variables, and whose edges correspond to the (binary) constraints of the original problem. We treat an edge in this graph as a pair of complementary directed edges, and define a directed edge $x_i \rightarrow x_j$ of the graph to be arc consistent iff $\forall x \in D_i\ \exists y \in D_j$ such that the pair $(x, y)$ satisfies all the constraints existing for the edge.

An inconsistent arc can be brought into consistency by removing specific values from the domains of some variables (specifically, those values $x \in D_i$ for which there does not exist a $y \in D_j$ satisfying some specific constraint). This works to reduce and simplify the original problem.

Arc consistency

Let’s consider the following example map coloring problem:

\[ D_1 = \{ R, G, B \}, \]
\[ D_2 = \{ R, G \}, \]
\[ D_3 = \{ R \}, \]
\[ C = \{ x_1 \neq x_2, x_2 \neq x_3, x_1 \neq x_3 \}.
\]

The arc \((x_1, x_2)\) is arc consistent, since both \(\forall x \in D_1\ \exists y \in D_2\ x \neq y\) and \(\forall y \in D_2\ \exists x \in D_1\ x \neq y\) hold.

The fact that arc consistency holds is a mixed blessing. It means that the constraint satisfaction checking of a specific arc in the graph does not contribute to solving the problem. However, a full analysis of the whole CSP constraint graph can sometimes give quite useful results.

We again consider the map coloring problem: \( D_1 = \{R, G, B\},\ D_2 = \{R, G\},\ D_3 = \{R\},\ C = \{x_1 \neq x_2, x_2 \neq x_3, x_1 \neq x_3\} \). Analyzing the first constraint (\(x_1 \neq x_2\)) gives nothing because, as previously noted, this edge is arc consistent. (For each value from \(D_1\) there is a value in \(D_2\) which satisfies the constraint, and the other way around.)

However, analyzing the second constraint (\(x_2 \neq x_3\)) gives some useful results. Even though for \(x_3 = R\) there exist corresponding values of \(x_2\), for \(x_2 = R\) there is no value of \(x_3\) satisfying that constraint. So the value \(R\) can be removed from the domain of \(x_2\).

An example: map coloring (cntd.)

A similar analysis for the constraint \((x_1 \neq x_3)\) allows striking the value R from the domain of \(x_1\):

\[ x_1 \in \{\text{R,G,B}\} \rightarrow \{\text{G,B}\} \]
\[ x_3 \in \{\text{R}\} \]
\[ x_2 \in \{\text{R,G}\} \rightarrow \{\text{G}\} \]

Analyzing all the constraints ended with a partial reduction of the variables’ domains. The problem has been simplified (there are fewer possible value assignments to variables), but there still exists more than one potential solution. But it is easy to observe that the arc consistency checking could, and should, be continued.

Constraint propagation

Since the arc consistency checking results in the reduction of the domains of some variables, it makes sense to repeat the process for the constraint graph edges which were originally consistent, or which have been made consistent.
This leads to **constraint propagation**, which means repeating the consistency checking as long as values continue to be removed from the variables’ domains. In the map coloring example, the constraint propagation causes the edge \((x_1, x_2)\) — originally consistent — to remove the value \(G\) from the domain \(D_1\):

![Diagram showing constraint propagation]

Finally, all the variables have singleton domains, and, furthermore, all the values satisfy all the constraints. Thus the constraint propagation in this case helped solve the problem and determine the unique solution. In general, consistency checking and constraint propagation lead merely to a simplification, and not necessarily to a complete solution, of a problem.

Algorithms for computing arc consistency

The easiest approach to computing arc consistency is to take each constraint in turn and test its logical conditions. But since this may have to be repeated due to propagation, even for a single edge, a lot of computation results. Some savings are possible. It can be observed that after a reduction of some domain $D_i$, the propagation can give new results only by checking the edges of the form $(D_k, D_i)$, so only these need to be checked. What’s more, after any resulting reduction in $D_k$ there is no need to check the edge $(D_i, D_k)$, since the elements removed from $D_k$ during this reduction were not necessary for satisfying any constraint for any of the elements of $D_i$. The algorithm computing the constraint propagation this way is called AC-3.

When an arc’s consistency is checked again, the same conditions are evaluated for the same pairs of values. Memorizing these verified value pairs (in a proper data structure) can help avoid recomputing them during subsequent propagations. This is accomplished by yet another algorithm, called AC-4.
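The AC-3 propagation loop just described can be sketched in Python. The arc representation (a dictionary mapping each directed arc to a predicate) is illustrative, not from the source; a reduction of $D_i$ re-enqueues only the arcs pointing at $x_i$, skipping the arc back to the variable that caused the reduction.

```python
from collections import deque

def ac3(domains, constraints):
    """AC-3: constraints maps each directed arc (xi, xj) to a predicate c(x, y).
    Repeats single-arc revision until no more values can be removed.
    Returns False if some domain becomes empty (no solution possible)."""
    queue = deque(constraints)
    while queue:
        xi, xj = queue.popleft()
        c = constraints[(xi, xj)]
        # values of xi with no supporting value in the domain of xj
        removed = [x for x in domains[xi]
                   if not any(c(x, y) for y in domains[xj])]
        if removed:
            for x in removed:
                domains[xi].remove(x)
            if not domains[xi]:
                return False
            # only arcs pointing at xi can yield new reductions;
            # the arc coming from xj need not be re-checked
            for (xk, xl) in constraints:
                if xl == xi and xk != xj:
                    queue.append((xk, xl))
    return True

# The running map coloring example: propagation leaves singleton domains.
domains = {'x1': ['R', 'G', 'B'], 'x2': ['R', 'G'], 'x3': ['R']}
ne = lambda a, b: a != b
arcs = {(i, j): ne for i in domains for j in domains if i != j}
ac3(domains, arcs)
```

On this instance the propagation reaches the unique solution described in the text: $x_1 = B$, $x_2 = G$, $x_3 = R$.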
Constraint propagation — the unsolved cases

It is easy to notice that in another instance of the map coloring problem presented here, all arcs are consistent. Nevertheless, the problem has no solution.

In still another instance all arcs are again consistent. The problem has two solutions, and the constraint propagation does not help in determining them explicitly, nor does it result in any reductions.

By adding to the previous problem the constraint \((x_1 \neq B) \lor (x_2 \neq R)\), we obtain yet another instance, in which only one solution is valid, but it still cannot be determined by constraint propagation.

So computing arc consistency and constraint propagation do not by themselves guarantee determining a solution of a CSP problem. It is necessary to search.

Path consistency

We define for a CSP constraint graph the notion of **K-consistency**. A graph is K-consistent (for some K) if, for any (K-1) variables and any (K-1)-tuple of their values satisfying all the constraints among those variables, there exists in the domain of any selected K-th variable a value such that the resulting K-tuple of values satisfies all the constraints among the K variables.

A constraint graph is **strongly K-consistent** if it is J-consistent for every J ≤ K. Note that the previously defined arc consistency is equivalent to the strong 2-consistency of a constraint graph. The strong 3-consistency of a graph is also called **path consistency**.

The significance of K-consistency is that if the constraint graph of a CSP problem with n nodes is strongly n-consistent, then the problem can be solved without searching. However, the algorithms for enforcing K-consistency are exponential, so it is seldom worthwhile to do so. An exception is a weaker version of path consistency — the **restricted path consistency**, for which there is a practical algorithm, and which is therefore sometimes computed.
Searching in CSP problems

Any of the previously discussed search algorithms may be used for CSP problems. However, in most really hard CSP problems, where the constraints amount to tight, hard-to-meet compromises, what matters most is precisely the analysis of these constraints, both syntactic and semantic. On the other hand, it is typically hard to come up with a useful heuristic capable of guiding the process of searching the space of value assignments to the variables.

Therefore the simplest of the search algorithms, the backtracking search (BT), is often used. In place of a good heuristic moving the best choices to the front of the list, this algorithm may be augmented with local constraint satisfaction checking, which reduces the number of choices for the subsequent steps. In the extreme case, when the domain of some variable gets reduced to an empty set, the algorithm immediately backtracks to the alternative values in earlier assignments.

Example: the 4-queens problem

Let’s now consider the application of the BT (backtracking) algorithm to the 4-queens problem. We formulate the problem as assigning row positions to the 4 queens belonging to the different columns of the $4 \times 4$ chessboard. Note that the BT algorithm explores the search tree but does not store it in memory, just the current path.

The algorithm checks the constraints only after placing all the queens on the board. It will surely solve the problem, but it makes many unnecessary steps, which could be eliminated. For example, all the terminal configurations are invalid due to the placement of the second queen. This can be seen at depth level 2 already.

Example: the 4-queens problem (cntd.)

An obvious improvement to the algorithm is then to test the constraints on all variables as soon as they have been assigned values. Should any constraint be found to be violated, the most recently made value assignment is immediately dropped, and the algorithm backtracks.
This algorithm will be called early checking (BT-EC). It is obviously advantageous over the BT algorithm, since the tested constraints would have to be checked later anyway.

Example: the 4-queens problem (cntd.)

Combining the backtracking search with just the minimal form of local constraint satisfaction checking gives the *forward checking* (BT-FC) algorithm. All the constraints involving the variable just assigned a value are checked, and only those. In most cases this algorithm is advantageous over BT-EC, and certainly over BT.

Example: the 4-queens problem (cntd.)

It is also possible to apply the full arc consistency checking, with propagation. The algorithm doing so is sometimes called the *look-ahead* (BT-LA) algorithm. It may significantly reduce the size of the explored search space, as it does in the 4-queens example here. However, the cost of performing those checks is significant, and BT-LA may not always be advantageous over the BT-FC algorithm.

Dependency-directed backtracking

In searching the CSP tree we may encounter a failure, causing the BT algorithm to backtrack, whose cause was not the most recently selected assignment but one of the earlier steps. In such a case the algorithm will keep trying various possibilities, generating only failures, until it backtracks sufficiently far and changes the assignment of the offending variable.

It is possible to detect such cases, when the set of variables involved in constraints with the current variable — the **conflict set** — does not include the most recently assigned variable. In these cases the algorithm can backtrack, not just a single step, but all the way to the most recently assigned variable from the conflict set. Such an algorithm is called **backjumping** (BJ).

Simple backjumping now has mainly historical value, since it solves a problem which does not arise in practice: the arc consistency checking present from BT-FC upward eliminates those cases completely.
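The BT-EC and BT-FC variants above can be sketched in Python for the n-queens formulation used in the example (column $i$'s queen gets a row). The function and its flag are illustrative, not from the source: with `forward_check=False` constraints are tested on assigned variables only (BT-EC); with `forward_check=True` the attacked rows are also pruned from the later columns' domains, and an emptied domain triggers immediate backtracking (BT-FC).

```python
def queens(n, forward_check=False):
    """n-queens by backtracking; column i's queen is assigned a row."""
    def attacks(r1, c1, r2, c2):
        return r1 == r2 or abs(r1 - r2) == abs(c1 - c2)

    def place(rows, domains):
        col = len(rows)
        if col == n:
            return rows
        for row in domains[col]:
            # early check: constraints against already assigned columns
            if any(attacks(rows[c], c, row, col) for c in range(col)):
                continue
            if forward_check:
                # prune attacked rows from the domains of later columns
                pruned = [[r for r in domains[c]
                           if not attacks(row, col, r, c)]
                          for c in range(col + 1, n)]
                if not all(pruned):        # some domain emptied: back up now
                    continue
                next_domains = domains[:col + 1] + pruned
            else:
                next_domains = domains
            result = place(rows + [row], next_domains)
            if result:
                return result
        return None

    return place([], [list(range(n)) for _ in range(n)])
```

Both variants find the same first solution for $n = 4$, namely rows (1, 3, 0, 2) counting from zero; forward checking merely discovers the dead ends earlier.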
However, backjumping is still useful with a slightly extended concept of the conflict set, defined as the set of those variables whose assigned values caused a constraint failure of the current variable, along with the subsequently assigned variables. A version of BJ based on this definition is called **conflict-directed backjumping**, and it is capable of determining the backjumping steps in cases where consistency checking does not help.

Dynamic ordering

We have noted earlier that it is difficult to obtain good heuristics indicating good moves when searching the space of most CSP problems. There do exist, however, other techniques augmenting this search, based on dynamic ordering, both of the variables (to select those which should receive assignments first) and of the values (to select those which should be tried first).

The most constrained variable heuristic (or MRV, for Minimum Remaining Values) suggests selecting first the variables with the smallest domains. Such a choice gives the best chance of encountering inconsistencies and taking advantage of the resulting reductions. This heuristic also works well within the BT-FC algorithm. Another heuristic which may be useful in selecting a variable is the degree heuristic, which suggests the variable occurring in the highest number of constraints with unassigned variables.

Once a variable to assign is chosen, the least constraining value heuristic may be used, which prefers the values excluding the fewest values of the other variables.

Local search for CSP

Another approach which works well with some CSP problems is based on local search. After a more or less random choice of an initial value assignment for all the variables, an incremental repair is attempted. Greedy hill-climbing search may be used, which does not explore the search space systematically, unlike the BT family of algorithms.
Often successful in such search for CSP problems is the min-conflict heuristic, which works by randomly selecting a variable violating some constraint and selecting a new value for it, so as to minimize its conflicts (the number of failed constraints) with the other variables.

Some CSP problems can be solved with surprising efficiency using this approach. The key element of success is the randomness, which helps to escape local maxima and other traps, and to select the right variable to repair, or to skip an unfortunate variable choice for which the right value would better be assigned later.

1. Consider the CSP problem with four variables: $A, B, C, D$, with domains $\{1, 2, 3\}$ for each, and the set of constraints given below. Draw the constraint graph for the problem, and then try to solve it using constraint propagation (arc consistency). Show each step of the solution (no picture). Show the graph after the termination of constraint propagation. How many possible CSP problem solutions does it represent? Write down one of them.

The constraint set: $C = \{C \neq D, B > D, B > C\}$
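The min-conflict repair described above can be sketched in Python for n-queens. Everything here is illustrative rather than from the source; random restarts stand in for the escape from traps that the text attributes to randomness.

```python
import random

def min_conflicts_queens(n, max_steps=1000, restarts=50, seed=0):
    """Min-conflicts local repair for n-queens: start from a random
    assignment, repeatedly pick a conflicted column at random, and move
    its queen to a row minimizing the number of conflicts; restart from
    a fresh random assignment when stuck."""
    rng = random.Random(seed)

    def conflicts(rows, col, row):
        return sum(1 for c in range(n)
                   if c != col and (rows[c] == row or
                                    abs(rows[c] - row) == abs(c - col)))

    for _ in range(restarts):
        rows = [rng.randrange(n) for _ in range(n)]
        for _ in range(max_steps):
            conflicted = [c for c in range(n) if conflicts(rows, c, rows[c])]
            if not conflicted:
                return rows                  # all constraints satisfied
            col = rng.choice(conflicted)     # random conflicted variable
            scores = [conflicts(rows, col, r) for r in range(n)]
            best = min(scores)
            rows[col] = rng.choice([r for r in range(n)
                                    if scores[r] == best])
    return None
```

Breaking ties among equally good rows at random, rather than always taking the first, is what lets the repair drift across plateaus instead of cycling.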
DLDMOD Software Manual Campbell Scientific, Inc. INTRODUCTION TO THE DLDMOD MANUAL This manual is supplied with the preliminary release of DLDMOD. There are two parts to this manual. The first part is intended to provide an overview of DLDMOD. The second part describes the procedures required to develop a working DLDMOD application. All of the available commands are listed with several examples of how to use them. As with most computer languages, much can be learned by studying programs previously written by software developers familiar with the language. For this reason, some sample programs have been included on the floppy disk supplied to you. Throughout this manual, "developer" is used to describe the person that will write the DLDMOD application. This is most likely the person reading this manual. The word "user" describes the person running the application created by the "developer". It is important that you read the agreement included with the package you received and that you comply with the provisions stated. As a developer, you are responsible for supporting your DLDMOD applications and the datalogger programs created with your applications. Campbell Scientific, Inc. cannot assume any responsibility for the support of applications you create. You should not distribute your application (or allow your application to be distributed) to users that you will not support. Comments and suggestions are welcome and should be relayed to the person that supplied your copy of DLDMOD. LIMITED WARRANTY Campbell Scientific, Inc. warrants that the magnetic diskette on which the accompanying computer software is recorded and the documentation provided with it are free from physical defects in materials and workmanship under normal use. Campbell Scientific, Inc. warrants the computer software itself will perform substantially in accordance with the specifications set forth in the Operator's Manual published by Campbell Scientific, Inc. Campbell Scientific, Inc. 
warrants the software is compatible with IBM PC/XT/AT and PS/2 microcomputers and 100% compatible computers only. Campbell Scientific, Inc. is not responsible for incompatibility of this software running under any operating system other than those specified in accompanying data sheets or operator's manuals. The above warranties are made for ninety (90) days from the date of original shipment. Campbell Scientific, Inc. will replace any magnetic diskette or documentation which proves defective in materials or workmanship without charge. Campbell Scientific, Inc. will either replace or correct any software that does not perform substantially according to the specifications set forth in the Operator's Manual with a corrected copy of the software or corrective code. In the case of a significant error in the documentation, Campbell Scientific, Inc. will correct errors in the documentation without charge by providing addenda or substitute pages. If Campbell Scientific, Inc. is unable to replace defective documentation or a defective diskette, or if Campbell Scientific, Inc. is unable to provide corrected software or corrected documentation within a reasonable time, Campbell Scientific, Inc. will either replace the software with a functionally similar program or refund the purchase price paid for the software. Campbell Scientific, Inc. does not warrant that the software will meet licensee's requirements or that the software or documentation are error free or that the operation of the software will be uninterrupted. The warranty does not cover any diskette or documentation which has been damaged or abused. The software warranty does not cover any software which has been altered or changed in any way by anyone other than Campbell Scientific, Inc. Campbell Scientific, Inc. is not responsible for problems caused by computer hardware, computer operating systems or the use of Campbell Scientific, Inc.'s software with non-Campbell Scientific, Inc. software. 
**ALL WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED AND EXCLUDED. CAMPBELL SCIENTIFIC, INC. SHALL NOT IN ANY CASE BE LIABLE FOR SPECIAL, INCIDENTAL, CONSEQUENTIAL, INDIRECT, OR OTHER SIMILAR DAMAGES EVEN IF CAMPBELL SCIENTIFIC, INC. HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES.** Campbell Scientific, Inc. is not responsible for any costs incurred as a result of lost profits or revenue, loss of use of the software, loss of data, cost of re-creating lost data, the cost of any substitute program, claims by any party other than the licensee, or for other similar costs. **LICENSEE'S SOLE AND EXCLUSIVE REMEDY IS SET FORTH IN THIS LIMITED WARRANTY. CAMPBELL SCIENTIFIC, INC.'S AGGREGATE LIABILITY ARISING FROM OR RELATING TO THIS AGREEMENT OR THE SOFTWARE OR DOCUMENTATION (REGARDLESS OF THE FORM OF ACTION - E.G. CONTRACT, TORT, COMPUTER MALPRACTICE, FRAUD AND/OR OTHERWISE) IS LIMITED TO THE PURCHASE PRICE PAID BY THE LICENSEE.** **LICENSE FOR USE** This software is protected by both United States copyright law and international copyright treaty provisions. You may copy it onto a computer to be used and you may make archival copies of the software for the sole purpose of backing-up Campbell Scientific, Inc. software and protecting your investment from loss. All copyright notices and labeling must be left intact. This software may be used by any number of people, and may be freely moved from one computer location to another, so long as there is no possibility of it being used at one location while it's being used at another. The software, under the terms of this license, cannot be used by two different people in two different places at the same time. **RELATIONSHIP** Campbell Scientific, Inc. hereby grants license to use DLDMOD in accordance with license statement above. No ownership in Campbell Scientific, Inc. patents, copyright, trade secrets, trademarks, or trade names is transferred by this Agreement. 
Developer may create as many different applications as desired and freely distribute them. Campbell Scientific, Inc. expects no royalties or any other compensation outside of the DLDMOD purchase price. Developer is responsible for supporting DLDMOD applications created by the developer.

**RESPONSIBILITIES OF DEVELOPER**

The developer agrees:

- To provide a competent programmer familiar with Campbell Scientific, Inc. datalogger programming to write the DLDMOD applications.
- Not to sell or distribute "COMPILE.EXE" or "DLDMOD.EXE" in any form.
- Not to freely distribute any other Campbell Scientific, Inc. software (i.e. PC208) in any form.
- To develop DLDMOD applications solely for the support of Campbell Scientific, Inc. dataloggers. No attempt will be made to support non-Campbell Scientific, Inc. dataloggers with DLDMOD applications.
- To assure that each application developed with DLDMOD clearly states the name of the person or entity that developed the application. This information should appear on the first window the user will see.

WARRANTY

There is no written or implied warranty provided with DLDMOD software other than as stated herein.

TERMINATION

Any license violation or breach of Agreement will result in immediate termination of the developer rights herein and the recovery of all DLDMOD materials supplied by Campbell Scientific, Inc.

MISCELLANEOUS

Notices required hereunder shall be in writing and shall be given by telegram, telex, or similar communication or by certified or registered air mail, return receipt requested. Such notice shall be deemed given in the case of telegrams or similar communication when sent and in the case of certified or registered mail on the date of receipt.

This Agreement shall be governed and construed in accordance with the laws of the State of Utah, U.S.A. Any dispute resulting from this Agreement will be settled in arbitration.
This Agreement sets forth the entire understanding of the parties and supersedes all prior agreements, arrangements and communications, whether oral or written pertaining to the subject matter hereof. This Agreement shall not be modified or amended except by the mutual written agreement of the parties. The failure of either party to enforce any of the provisions of this Agreement shall not be construed as a waiver of such provisions or of the right of such party thereafter to enforce each and every provision contained herein. If any term, clause, or provision contained in this Agreement is declared or held invalid by a court of competent jurisdiction, such declaration or holding shall not affect the validity of any other term, clause, or provision herein contained. Neither the rights nor the obligations arising under this Agreement are assignable or transferable. OVERVIEW OF DLDMOD USER The user has measurements to make with a datalogger and a DLDMOD application created by a developer for the user's application or a similar application. The user does not need any knowledge of datalogger programming. DLDMOD provides an easy way to set up the datalogger for the supported application. The user runs the DLDMOD application, fills in the blanks, and selects options from menus. There may also be appropriate help prompts for each blank and option. As the answers are entered, they are checked against a list of acceptable answers. The user is asked to reenter any values that fail. When this is done, a FILENAME.DLD file has been created and is ready to download to the datalogger. This manual has been written for the developer rather than the user. DEVELOPER The developer creates the DLDMOD application the user will run. The developer has knowledge of the types of measurement and control the user wants to do. While the developer may not know each user's exact datalogger configuration, the developer does understand what variations the users might have. 
The developer also knows how to program a datalogger. The developer could program the FILENAME.DLD file for the user directly if it was known exactly what the user needed to do. DLDMOD is a programming language designed to allow the developer to ask the user questions and thereby change or create a FILENAME.DLD file based on the user's answers. DLDMOD programming consists of two parts: getting information from the user, and creating a datalogger program for the user. As part of creating a datalogger program, wiring diagrams can be generated, input location usage shown, etc. DLDMOD does not do these things itself, but DLDMOD provides the tools for the developer to do them.

INSTALLATION

You should make a backup copy of the original DLDMOD disk. To use DLDMOD, simply copy the files from the backup copy of the original disk to the directory of your choice. DLDMOD works best from a hard disk. The following example shows the DOS commands to install the DLDMOD software in a subdirectory named C:\DLDMOD on the C: drive. It assumes the floppy containing DLDMOD is in the A: drive.

```
CD\
MD DLDMOD
CD DLDMOD
COPY A:*.*
```

SAMPLE PROGRAMS

This section will help explain the ten sample overview programs (OVERVW1.FMT through OVERVW9.FMT and DEMO.FMT) included with DLDMOD. The steps to compile and run these .FMT source files are described so you can try them. For example, to run OVERVW1.FMT, you would type the following at a DOS prompt (after changing to the subdirectory where DLDMOD was installed):

```
Compile overvw1
Makeexe overvw1
overvw1
```

Repeat these steps, with the appropriate file name, to run the other examples. While running an example, press F1 for a list of keys that can be used. Go ahead and try some of the examples now. When you run any example the second time, you will see a screen asking if you want to use the default answers or those you selected last time you ran the application. Experiment with both options.
DLDMOD saves all of the variables in a file when program execution moves to the Compile section. When a DLDMOD application is run, it checks for the file containing the variables and will optionally use the values as the defaults. This is built into DLDMOD. You can select the default answers without being asked by typing a "D" after the example name. For example, to run the first example you could type:

```
overvw1 d
```

This would use the default answers. All these examples are DLDMOD programs that allow the end-user to make a measurement with one of three sensors: SENSOR A, SENSOR B, and SENSOR C. A NO SENSOR selection is also provided. The user's selection will be used to create a datalogger program named OVERVIEW.DLD. Note that the sensors, datalogger instructions, and the OVERVIEW.DLD program are examples only. All of the examples create a file named OVERVIEW.DLD, overwriting any existing files with the same name. The example datalogger instructions needed to measure each of the three sensors are as follows:

<table>
<thead>
<tr><th>Sensor A</th><th>Sensor B</th><th>Sensor C</th></tr>
</thead>
<tbody>
<tr><td>01:P1</td><td>01:P2</td><td>01:P3</td></tr>
<tr><td>01:1</td><td>01:3</td><td>01:4</td></tr>
<tr><td>02:2</td><td>02:2</td><td>02:5</td></tr>
<tr><td>03:3</td><td>03:1</td><td>03:6</td></tr>
<tr><td>04:4</td><td></td><td>04:7</td></tr>
<tr><td></td><td></td><td>05:8</td></tr>
</tbody>
</table>

All DLDMOD programs have two main sections: Declarations and Compile. In the Declaration section, the developer sets up what the user will see and do. In the Compile section the developer actually makes changes to the .DLD file.
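Putting the two sections side by side, a minimal .FMT source file has roughly the following shape. This is only a sketch — the names (Sensor, Main) are placeholders, and the exact syntax of each section is described later in this manual:

```
VARS
Sensor
CHR
1
'A', 'B', 'C', 'N'
'Which sensor is connected (A, B, C, or N for none)?'
'N'

WINDOWS
Main
full
dialogbox, frame
nothing

TEXT Main
Sensor to measure: ^Sensor
@DB
@DB
STOP

COMPILE
DLDFILE=New 'overview', 'CR10'
SAVE
END
```

The declarations (VARS, WINDOWS, TEXT) control what the user sees; only the COMPILE...END block actually writes the .DLD file.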
When a DLDMOD program is run, it starts by displaying the first window, as declared in the Declarations section. From this point, the user navigates through the windows with the cursor keys or the mouse, making selections (menu options) and filling in prompts (variables). DLDMOD provides two ways to get information from the user: variables and menu options. Variables provide a prompt and the user types in the appropriate information. Try OVERVW1 to see an example of a variable. Menu options allow the user to move the highlight cursor to the desired option and press ENTER to select it. Try OVERVW2 to see an example using options. The Declaration section describes how the windows are displayed and what options and variables are used. When the user has finished all the declared windows or when the user presses the F4 key, DLDMOD program flow moves to the Compile section. Usually at this point, the user's answers are committed and are used to actually create the FILENAME.DLD file (OVERVIEW.DLD in these examples). Prior to moving to the Compile section, it is good practice to allow the user to change answers and responses as needed. Program execution before moving to the Compile section is not sequential, but is event driven in nature. Windows are displayed and removed based on the user's selections and key presses. The sequence of the main windows and the response to each menu option is described in the Declarations section, but within these constraints the user is free to navigate as desired. Program execution in the Compile section is sequential and it is easy to trace program flow. The following is the Compile section from the first few examples. Remember this is the code that actually changes OVERVIEW.DLD.
```
COMPILE
DLDFILE=New 'overview', 'CR10'
CHANGETO 10 at 1
IF (sensor = 'A') THEN
  INSERT 1:1,2,3,4 at 1:1
ENDIF
IF (sensor = 'B') THEN
  INSERT 2:3,2,1 at 1:1
ENDIF
IF (sensor = 'C') THEN
  INSERT 3:4,5,6,7,8 at 1:1
ENDIF
INSERT 28 at 10:1
INSERT 64 at 10:2
RENUM
SAVE
END
```

If the user selected SENSOR B the created OVERVIEW.DLD file would be:

```
!CR10
overview
MODE 1
SCAN RATE 10
1:P2
```

Compile and run OVERVW1 if you have not already done so. Notice how OVERVW1 ends as soon as you type a correct response. When you type enough characters to fill a variable's box, DLDMOD automatically advances to the next variable or option. If there are no more on the current window, DLDMOD will advance to the next window. If there are no more windows, DLDMOD advances to the Compile section. Run OVERVW1 again. Notice how typing ENTER accepts the default answer. Try an incorrect response to see how DLDMOD checks the answers. One disadvantage of variables is the user can always type in an incorrect response. DLDMOD will check the answers, but it can be frustrating for a user. Now compile and run OVERVW2. OVERVW1 and OVERVW2 get the same information from the user and create the same OVERVIEW.DLD file. This gives a comparison of variables and menu options. Menu options provide a simple way to let a user see all the choices and select one without typing. Notice that OVERVW2 also ends when a selection is made. This is similar to what OVERVW1 did but for a different reason. The following fragment is from the Declaration section. Notice the Clearback attribute on the main window declaration:

```
WINDOWS
Main
full
dialogbox, clearback, frame
Nothing
```

The Clearback attribute causes a window to be removed after the first menu option is selected. If it were not present, the window would remain until the user pressed ESC or F4. Compile and run OVERVW3. Notice it has no Clearback declaration so the window is not removed until you select the DONE option.
The following is the window declaration from OVERVW3:

```
WINDOWS
Main
full
dialogbox, Frame
readonly(sensor)+
```

The other difference is the addition of a READONLY command. This allows you to display a variable without allowing the user to move the cursor to it or edit it. Remember the last line of a window declaration is a single command (using multiple commands will be discussed later) executed just before the window is displayed. This command marks the SENSOR variable as READONLY. We can still change the variable's value under program control, but a user cannot edit it while it is READONLY. READONLY variables are a handy way to give users feedback when they make a selection using menu options. Any variable can be marked READONLY, and they are entered in the TEXT section just like any other variable. The other change of interest is the DONE menu option. Looking at the Declaration shows how it works:

```
Done
removewin
'Done making selections'
```

When DONE is selected, the current window is removed. Since there are no more windows, DLDMOD moves through the Compile section and the program ends. Window declarations allow a single executable statement, executed just before the window is displayed. Menu options also allow a single executable statement. If you require more than one command, there are two ways to accomplish it. You could put multiple commands in a subroutine and then call the subroutine as the single statement. Subroutines are most useful if there is a need to execute the same code in multiple places. DLDMOD treats an entire IF THEN ELSE ENDIF block as one statement. You can group multiple statements into a single IF block and it will compile as a single statement. OVERVW4 is the same as OVERVW3 except it changes the display color of the READONLY variable.
It uses the IF block technique to group two statements into the window declaration as follows:

```
WINDOWS
Main
full
dialogbox, Frame
if (1=1) then       ; Use If statement to combine
readonly(sensor)+   ; multiple statements where
rocolor(light cyan) ; only one is expected
endif
```

Notice that IF (1=1) will always be true, so the two statements are always executed. Also notice the comments at the end of the line. Everything after the semicolon is ignored. Compile and run OVERVW4. OVERVW5 introduces several new concepts. Compile and run it. OVERVW5 uses a subwindow. Subwindows are displayed only when the DISPLAY command is used. A subwindow is removed by:

- Selecting an option when the Clearback attribute is set.
- Pressing the ESC key.
- Filling in a variable box when that variable is the last item on the window.
- Execution of the Removewin command.

When a subwindow is removed, the previously displayed window is restored. Notice the use of the SELECT LOGGER menu option to display the subwindow. Subwindows are useful for grouping and reducing complexity. The new information about what datalogger is being used is put to use in the first line of the Compile section:

```
DLDFILE=New 'overview', dtype
```

This differs from our previous DLDFILE instruction by using the new variable DTYPE instead of the constant type CR10 as in the previous examples:

```
DLDFILE=New 'overview', 'CR10'
```

Run OVERVW5 and select the 21X datalogger type. Use the DOS TYPE command to view the OVERVIEW.DLD file and see the difference. When you run OVERVW5, notice it is somewhat awkward to enter the datalogger type. Also notice that once you leave the LOGGERS subwindow, you have no indication of which datalogger you selected. OVERVW6 fixes both of these problems. No new techniques are used, just menu options and READONLY variables. Compile and run OVERVW6. Even with a small DLDMOD program, it is important to give visual indication of what a user has selected. The DLDMOD License agreement requires that you indicate who created the DLDMOD application.
OVERVW7 adds an introduction window to do just that. Notice the difference between windows and subwindows. Windows are displayed in sequence until they have all been displayed. Subwindows are only displayed with the DISPLAY command. Once the INTRO window has been displayed there is no way to return to it. The LOGGERS subwindow can be displayed as many times as needed. Notice how the INTRO window uses the DONE option to move to the next window. If INTRO had no variables or menu options the user would have to press RETURN to continue. Also notice that the DONE menu option is used in two different places. Menu options and variables can be used on multiple windows. Compile and run OVERVW7 to get a feel for windows and subwindows. While running OVERVW7, press F1 and look at the help screen. DLDMOD provides this as a default help screen, but you can create your own. Compile and run OVERVW8. Notice how pressing the F1 key displays a customized HELP window. The HELP attribute, as part of a window declaration, makes that window become the HELP window. The following are the window declarations from OVERVW8:

```
WINDOWS
Intro
full
frame
nothing
Main
full
dialogbox, Frame
if (1=1) then       ; Use IF statement to combine
readonly(sensor)+   ; multiple statements
readonly(dtype)+    ; where only one is expected
rocolor(light cyan)
endif
SUBWINDOWS
loggers
5,5,30,12
dialogbox, clearback, frame
nothing
myhelp
7,7,45,15
help, frame
nothing
```

Two rules apply when creating your own HELP window. Only one window should have the HELP declaration. If two or more are declared, only one of them will be displayed. Also, do not display the HELP window with a DISPLAY command. This might violate the more general rule of never allowing a window to be displayed twice at the same time. Compile and run OVERVW9. OVERVW9 adds an additional subwindow to organize things a little better. Also included on the DLDMOD disk is DEMO.FMT. Compile and run this program.
It is an example using many of the techniques described here. Examine the source code for explanations of what was done. Many editors allow the special line-drawing characters used in the DEMO.FMT file. To add these characters, press and hold the ALT key while you type the three-digit code for the character you wish to draw. Consult an ASCII table (found in many DOS manuals) for the appropriate codes. When modifying these examples or writing your own .FMT files, be sure to use an ASCII text editor. Most word processors do not store files in plain ASCII, although some will optionally export to ASCII. The EDIT program that comes with DOS 5 or DOS 6 is a good ASCII editor.

PROGRAMMING HINTS

- It is important to observe the difference between an opening and a closing single quote when writing a FILENAME.FMT file. The opening quote is an accent mark (`), usually found on the upper left corner of the computer keyboard. The closing quote is an apostrophe (').
- DLDMOD is not case sensitive (A = a).
- Everything must be declared before you use it. For example, a variable must be declared before it can appear in a SET command. A window must be declared before it can be used in a DISPLAY command. You can rearrange the order of declarations to help accomplish this (i.e., subroutines can be after the variable declarations).
- Never allow a window to be displayed twice at the same time.
- The following lists the maximum number of variables, menu options, and windows that can be used:

  Windows 90
  Menu Options 140
  Variables 320

CREATING AN APPLICATION

The process of getting the final questions and FILENAME.DLD file to the user requires four operations by the Developer.

1. Writing a FILENAME.BSE file - A .BSE is a FILENAME.DLD file that will be modified to fit the user's answers, if needed. This is optional since DLDMOD is also capable of generating a new FILENAME.DLD file.
2.
Writing a FILENAME.FMT file - The .FMT file contains all the information the user will see and the actions to be taken depending on the answers.
3. Compiling the FILENAME.FMT file to a FILENAME.MEN file - The .MEN file is the compiled form of a debugged .FMT file, reduced to numbers and parameters. It is created by compiling the FILENAME.FMT file.
4. Once the .MEN file is created, it is combined with DLDMOD.EXE to make a stand-alone FILENAME.EXE file to distribute to the customer. A batch file, MAKEEXE.BAT, does this.

You shouldn't distribute the .MEN file, COMPIL.EXE, or DLDMOD.EXE. Only the FILENAME.EXE file and the .BSE file (if one is necessary) should be distributed. For example, if the application created was to be named SAMPLE, the steps would be as follows:

1. Use EDLOG to create the base file. EDLOG creates SAMPLE.DLD, which is copied and renamed to SAMPLE.BSE. (.BSE files are optional depending on the application.)
2. Use a text editor to create the SAMPLE.FMT file based on the DLDMOD syntax as described in this manual.
3. Compile the .FMT file to a .MEN file by typing COMPIL SAMPLE. If there are no errors, SAMPLE.MEN will be created.
4. Use the MAKEEXE.BAT file to create SAMPLE.EXE by typing MAKEEXE SAMPLE.

The resulting SAMPLE.EXE and SAMPLE.BSE (created with EDLOG) are distributed to the customer. No other files should be distributed. The MAKEEXE.BAT does the following:

COPY /B DLDMOD.EXE + SAMPLE.MEN SAMPLE.EXE

When the customer runs SAMPLE.EXE it will use the responses and the SAMPLE.BSE file to create SAMPLE.DLD. This is used to program the datalogger.

THE FILENAME.FMT FILE

The structure of a FILENAME.FMT file is broken down into seven parts:

1. Subroutines used in the program
2. Variables declaration
3. Menus declaration
4. Windows Attributes
5. Text to go in each window
6. Compile section
7. Comments

1. SUBROUTINES

SYNTAX:

SUBROUTINE name
instruction
END name

DESCRIPTION: To call the subroutine, use the command GOSUB Name.
After the subroutine is executed, control will be switched to the line after the line that called it.

EXAMPLE:

SUBROUTINE Colors
TextColor (yellow)
TextBackground (blue)
END Colors

2. VARIABLE DECLARATIONS

SYNTAX:

VARS
name
vartype
size of box
choices
dialog box message
default
name2
vartype
...

DESCRIPTION:

Variable Name
The name must be a unique string of letters ['A'..'z']. It can be up to eight characters long. Anything after that is ignored.

Vartype
May be one of the following:
CHR ..........{single character}
STR ..........{string}
INT ..........{-32768..32767}
REAL .........{1.5e-45..3.4e38}

When the program is actually run, if the user types in something that does not match the variable type expected, an error message is shown describing what expression is expected. The user is asked to input the answer again. If there is a default given, then it is displayed.

Size of Box
Must be in the range of 1 to the maximum size. This is the size of the box where the user is expected to input an answer.

Choices
A list of choices the user must choose from. As with the vartype, if the user doesn't enter something from this list, he is shown an error message. Enter "ALL" to allow any response of the correct type.

Dialog Box Message
When the program gets to this variable, the message on this line will appear in the dialog box. The message will automatically be word-wrapped to fit in the box. Enter the message on one line. To put a hard return in the message, break the message into two string literals separated by a comma. Two commas in a row mean a double return. See example AM32 below.

Default
The answer automatically shown on screen. If the FILENAME.FMT file was used before, the user is asked whether the responses that were last given should be used, or whether to begin again with the default answers the FILENAME.FMT program supplied. The word "none" on this line means the box will originally show up blank.

3.
MENU DECLARATION

SYNTAX:

OPTIONS
option_name
instruction to execute if selected
dialog box message
option2
instruction to execute if selected
...

DESCRIPTION:

Name of Option
Write the exact text you want highlighted when the user is ready to select this option.

Executable Instruction
Instruction to execute when the menu option is selected. Same as a Compile section instruction. Usually either a DISPLAY or GOSUB instruction.

Dialog Box Message
Same as the variable dialog box message. Appears in the dialog box when the user highlights this option.

EXAMPLE (a variable declaration; the AM32 entry is the one referenced from the VARS section above):

VARS
DataloggerType
STR
6
'21x', 'cr10', 'cr7'
'TMS supports either a 21x, CR10, or CR7 datalogger'
none
AM32
INT
6

4. WINDOW ATTRIBUTES

SYNTAX:

WINDOWS
name size attribute executable instr
name_2 size ...

SUBWINDOWS
name_3 size attribute executable instr
name_4 size ...

DESCRIPTION:

The SIZE of the window may be declared as "full" (takes up the whole screen) or in the following format: left marg, top marg, right marg, bottom marg. Any of the following attributes may be selected by including it on the line.

Frame
A double frame will appear around the window when active.

Dialogbox
Draw a dialog box in this window.

Clearback
Determines how long the window will remain visible. With clearback enabled, the window will be removed after the first menu option is selected, as if the user pressed the ESC key. This option is normally used on subwindows. When clearback is used on a main window, that window is removed and the next window is displayed. If no more main windows exist, execution passes to the compile section. Pressing ESC will show the aborted window on any of the main windows.

Rdonly
If enabled, it will prevent the user from editing any of the variables or selecting any menu bars within the window. This option is useful for summary screens, etc.

Help
This declaration will cause the window to be displayed when the user presses F1. Normally this will be a HELP window. Only one window should use the HELP declaration.
A default HELP window is provided if no window uses the HELP declaration. See the section on HELP.

Executable Instruction
Instruction to execute before the window is displayed. Enter "Nothing" if an instruction is not needed.

EXAMPLE:

<table>
<thead>
<tr> <th>WINDOWS</th> <th></th> </tr>
</thead>
<tbody>
<tr> <td>Start</td> <td>;first window</td> </tr>
<tr> <td>Full</td> <td></td> </tr>
<tr> <td>Frame, Clearback</td> <td>nothing</td> </tr>
<tr> <td>Main</td> <td>;main window</td> </tr>
<tr> <td>1,1,80,24</td> <td></td> </tr>
<tr> <td>Frame, Dialogbox</td> <td>GOSUB Colors</td> </tr>
<tr> <td>SUBWINDOWS</td> <td></td> </tr>
<tr> <td>Userhelp</td> <td>;User defined help screen</td> </tr>
<tr> <td>10, 5, 70, 15</td> <td></td> </tr>
<tr> <td>Help, Frame</td> <td>nothing</td> </tr>
<tr> <td>Summary</td> <td>;summary window</td> </tr>
<tr> <td>1, 1, 80, 24</td> <td></td> </tr>
<tr> <td>Clearback, Rdonly</td> <td>nothing</td> </tr>
</tbody>
</table>

Windows in the window section will be displayed in the order listed. Windows in the subwindow section are not displayed unless a "DISPLAY window_name" command is used. The exception to this is a window declared with the HELP attribute set, which is displayed when F1 is pressed.

5. WINDOW TEXT SECTION

SYNTAX:

TEXT window name
text to appear in window
STOP
TEXT window name2
text to appear in window
STOP

DESCRIPTION:

Windows must be declared before the TEXT is declared. All declared windows must have a TEXT section. Within the text, type a caret mark (^) followed by the name of the variable everywhere you want to use a variable (the variable must be previously declared). Everywhere you want a menu option to appear, type a tilde mark (~) followed by the name of the menu option and another tilde mark. Mark the two corners of the dialog box with @DB.

EXAMPLE:

TEXT Initialize
*
Before going out to collect data, the user defines the following:
1. ~Select Datalogger~
2. ~Select Hardware~
3. ~Select Measurements~
4.
~Defining Reports~
~HELP~
@DB
@DB
STOP

6. THE COMPILE SECTION

SYNTAX:

Compile
instruction ;comment
instruction1...
End

DESCRIPTION:

Execution of the instructions in the Compile section begins when the user presses the F4 key or when the last main window is finished. Before these instructions are executed, all of the variables (answers) are stored in a file. On subsequent executions of the application, DLDMOD presents a window giving the user the option of using the last answers as defaults. The user may also choose to use the original default answers. Only the last set of answers is saved, and any changes to variables in the Compile section are not saved. The variables are saved in a file with the same name as the .EXE file but using a .ANS extension. The executable instructions are described on the following pages. The Compiler is case insensitive, so it doesn't matter whether upper case or lower case letters are used. Commands, unless otherwise noted in the following pages, are expected to be one line each. Anything following a semicolon (outside of string literals) on a command line is ignored. The semicolon is optional.

Executable Commands:

Instruction            Structure or Example

File
DldFile .............. DLDFILE = 'example';
Save ................. SAVE

Entries/Programs
ChangeTo ............. CHANGETO 5 AT 1:4:2;
Delete ............... DELETE 1:3;
Insert ............... INSERT 2:5,2,1 AT 1:3;
Renum ................ RENUM;

Labels
InsLabel ............. INSLABEL `measure' AT 4;
Label ................ LABEL lbi1,,lbi2 AT 235;

Comments
Comment .............. COMMENT `Version 1' AT 4;

Program Control
If Then Else ......... IF (Reps = 1) THEN ELSE ENDIF

Variables
Inc .................. INC (number, step)
Dec .................. DEC (number, step)
Set .................. SET: variable = v
Readonly ............. READONLY (variable)+

Windows
Display .............. DISPLAY Window5
Print ................ PRINT PRN : Hookup1, A

Colors
TextColor ............ TextColor (yellow)
TextBackground ....... TextBackground (blue)
BorderColor .......... BorderColor (black)
RespColor ............ RespColor (blue)
RespBackGnd .......... RespBackGnd (yellow)
FrameColor ........... FrameColor (red)
HelpText ............. HelpText (red)
HelpBack ............. HelpBack (white)
HelpFrame ............ HelpFrame (blue)
MenuColor ............ MenuColor (red)
MenuBack ............. MenuBack (blue)
VarColor ............. VarColor (black)
VarBack .............. VarBack (white)
RoColor .............. RoColor (cyan)
RoBack ............... RoBack (green)

Misc.
ClrWin ............... ClrWin
RemoveWin ............ RemoveWin
Type ................. Type 'Please wait'
Typeln ............... Typeln `system is working'

FILE

DLDFILE defines which FILENAME.DLD file to use. To have DLDMOD edit an already existing FILENAME.DLD file, use the expression:

DLDFILE = filename

To have DLDMOD create a new FILENAME.DLD file, use the expression:

DLDFILE = NEW dldname, datalogger type

EXAMPLE:

<table>
<thead>
<tr> <th>Expression</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>DldFile = 'example';--------</td> <td>(will edit the file example.bse);</td> </tr>
<tr> <td>DldFile = new 'example', '21x';</td> <td>(will create a new file called example.dld for the 21X);</td> </tr>
<tr> <td>DldFile = new name, datalggrr;</td> <td>(will create a new file using the string stored in name with the datalogger type stored in datalggrr);</td> </tr>
</tbody>
</table>

If no datalogger type is specified when creating a new FILENAME.DLD file, it defaults to CR10. Note that it is unnecessary to specify the datalogger type when editing an existing file. DLDMOD will attempt to load a file with the extension .BSE if no extension is specified. A runtime error (NOT a compile error) will occur if a file specified for editing (i.e. not as NEW) is not found. For this reason, it is good practice to use the DLDFILE command as the executable instruction for the first window displayed (or as soon as possible).
Otherwise, the user may complete the entire windows section only to have the program abort with an error because the specified file was not found.

SAVE

When the DLDFILE is finished being edited, use the SAVE command to save the changes.

EXAMPLE:

<table>
<thead>
<tr> <th>Expression</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Save;---------------</td> <td>(Saves the file as the name given with the DldFile = command. If no extension was specified with dldfile= then .dld is used.)</td> </tr>
<tr> <td>Save 'Rain2';--------</td> <td>(Saves the file as Rain2.dld);</td> </tr>
<tr> <td>Save 'Rain2.ddd';-----</td> <td>(Saves the file as Rain2.ddd);</td> </tr>
<tr> <td>Save rain2;-------------</td> <td>(Saves the file as the name given to the variable rain2);</td> </tr>
</tbody>
</table>

The default extension for saving is .DLD. DLDMOD will overwrite any existing files without prompting when told to do so. It is good practice to NOT overwrite the original file, but to save the changed file under a different name. This allows the user to start over at any time.

PROGRAMS / ENTRIES

CHANGETO

To change a specified mode, parameter, program, or table interval, use the expression:

CHANGETO xxxx AT mode#:entry#:parameter#;

EXAMPLE:

<table>
<thead>
<tr> <th>Expression</th> <th>Description</th> </tr>
</thead>
<tbody>
<tr> <td>Changeto 45 at 1;--------</td> <td>(changes the first program table interval to 45 seconds);</td> </tr>
<tr> <td>Changeto 95 at 1:6;--------</td> <td>(changes the 6th program of table 1 to 95);</td> </tr>
<tr> <td>Changeto -16 at 1:5:4;------</td> <td>(changes the fourth parameter of the fifth entry of table 1 to -16. If the fourth parameter had not already existed in the fifth entry, then it is inserted at the correct place with the correct parameter);</td> </tr>
<tr> <td>Changeto tc1 at wh:wh2:wh3;</td> <td>(numeric variables can also be used);</td> </tr>
</tbody>
</table>

**DELETE**

DELETE xx:yy : deletes the yyth entry in the xxth mode.
No other entry numbers are changed.

**EXAMPLE:**

Delete 1:5;----------------- [will delete the fifth entry in the first mode];

**INSERT**

INSERT xx AT y:zz : inserts program xx at the zzth entry in mode y.

**EXAMPLE:**

Insert 5 at 1:3;----------------- [will insert program 5 at the third entry of table one. If there is already a program in that spot, then DLDMOD will insert it as the 2.001 entry. If there is already a 2.001 entry, then it will insert the program as 2.002 and so forth.];

**INSERT** 15:2,3 AT 2:36;----- [inserts P15 (with 2 as the first parameter and 3 as the second) at the 36th spot of table 2.];

**INSERT** tc1 AT tc2:tc3;----- [make sure these variables hold numbers];

**RENUM**

This is used for cleaning up the FILENAME.DLD program after you're done changing it. It renumbers the entries by integers. Programs don't necessarily need to be renumbered in order to run. Rather, RENUM makes the program easier to read.

**EXAMPLE:**

Renum;

**LABELS**

**INSLABEL**

Inserts label(s) at a specific location. If there is already a non-blank label at the given location, then it and the non-blank labels following it are moved over until there are enough blank labels to compensate for the inserted label(s).

**EXAMPLE:**

**INSLABEL** `NewLabel' AT 4;----- [will take the first set of labels and change it to the second set of labels];

Before:  :Cntr      :DF2 mV    :Vx mV     :Batt V    :Pulse ch2
         :Pulse ch1 :--------- :--------- :Fixed #1  :Fixed #2

After:   :Cntr      :DF2 mV    :Vx mV     :NewLabel  :Batt V
         :Pulse ch2 :Pulse ch1 :--------- :Fixed #1  :Fixed #2

Label follows the same rules as the Inslabel command as far as inserting more than one label at a time, inserting repetitive labels, using variables, and inserting at which locations.
The only exception is:

Label 'Cntr',,'Pulse ch1' at 3;----- [will change the 3rd and 5th labels, leaving the 4th label alone];

**EXAMPLE:**

**INSLABEL** 'Cntr',,'Pulse ch1' at 3; [will NOT work, and will flash an error when compiled];

Otherwise:

**INSLABEL** 'Cntr' at 236;------ [inserts the label Cntr at 236 and moves the following labels over as needed];

**INSLABEL** 'Cntr';----------------- [inserts Cntr as the first label];

**INSLABEL** 'Cntr', 'VarLabel,' at 5;--- [inserts the label Cntr as the fifth label, the value in VarLabel as the sixth, and a blank label in the seventh. The eighth label on down are moved over as needed];

**INSLABEL** 'test' at 5:236 at 10;--- [inserts test#5 to test#236 starting at the 10th location. All following labels are moved forward as needed];

**LABEL**

Changes a label or labels at a given location.

**EXAMPLE:**

**LABEL** 'Cntr' at 236;----------------- [changes the 236th label to Cntr];

**LABEL** 'Cntr';----------------- [changes the first label to Cntr];

**LABEL** 'Cntr', 'DF2 mV',, 'Vx mV'; [changes the first, second, and fourth labels to the labels indicated. The third label is ignored];

**LABEL** 'Cntr', 'VarLabel,' at 5;---- [changes the fifth label to Cntr, the sixth label to the value stored in VarLabel, and deletes the seventh label, replacing it with a blank label];

**LABEL** 'Fixed' at 5:1.5, 'VarLabel' at 10;----- [changes the first 15 labels, the first 5 being Fixed #1 to Fixed #5 and the next 10 being the value of VarLabel numbered 1 to 10];

**COMMENTS**

**COMMENT**

Adds comments to the .DLD file. Useful for documentation purposes. Comments within a .DLD file are always preceded by a (-). The COMMENT command also adds a (--) character to differentiate comments it adds from other comments (e.g., labels). When modifying comments, only those with the (--) character are counted or replaced. This instruction will overwrite an existing comment at the specified location.
February 8, 1996
**EXAMPLE:**
```
COMMENT 'Version 1.2' AT 1
COMMENT Name AT 2 ; (Name is a string variable)
```
**PROGRAM CONTROL**
**IF THEN ELSE** The If Then Else works similarly to other high-level languages, looking like:
```
IF (boolean expression) THEN
  executable instruction(s)
ELSE
  executable instruction(s)
ENDIF
```
**EXAMPLE:**
```
If (TcTemp = 'S') then
  Display SingleEnded;
endif;
if (Temp = 'C') then
  set: mult = 1;
  set: offset = 0;
else
  set: mult = 1.8;
  set: offset = 32;
endif;
if (Reps > 14) then
  Print Hookup1;
else
  if (reps < 5) then
    Print Hookup2;
  endif;
endif;
```
**VARIABLES**
**INC** and **DEC** Increment and decrement (respectively) any variable, v, by step counts. Step is an optional parameter; if it is not specified, the variable is incremented or decremented by one.
**SYNTAX:**
```
INC (v, step) or INC (v)
DEC (v, step) or DEC (v)
```
**EXAMPLE:**
```
inc (number, 2); --------------- (number := number + 2);
inc (number);------------------- (number := number + 1);
dec (number, 3); --------------- (number := number - 3);
dec (number);------------------- (number := number - 1);
```
**SET** Sets a variable equal to another variable, a constant, or an expression.
**EXAMPLE:**
```
Set: reps = 12;------------------- (sets the variable reps equal to 12.);
```
Math is also allowed in numeric expressions. Concatenation is allowed with string variables. The following operators are allowed:
- Addition for numeric, concatenation for string
- Subtraction (unary minus also allowed)
- Multiply
- Divide
- Modulo
- Exponentiation
- Parentheses (used to alter precedence)
**EXAMPLE:**
```
Set: loc = loc * reps
Set: temp = offset * (val3 + val2)
Set: lastname = name + 'Smith'
```
**READONLY** Marks a variable as READONLY or as NOT READONLY. A variable marked as READONLY is displayed normally on the screen except that it can't be highlighted or edited by the user (i.e., no edit box, and the cursor skips the affected variable when the cursor is moved).
All variables default to NOT READONLY.
**SYNTAX:**
```
Readonly (variable) option
```
Valid options are:
- (+) makes the variable READONLY.
- (-) makes the variable NOT READONLY.
The READONLY attribute of variables has no effect on the SET command. Only the window display and editing are affected.
**WINDOWS**
**DISPLAY** This command displays a window on the screen for the user. Once the user is finished with the window, the window is removed and the previous window is restored.
**SYNTAX:**
```
DISPLAY winname
```
**EXAMPLE:**
```
Display W1;
```
**PRINT** This command prints windows to either a text file or directly to the printer. The file can be appended to or newly created. The command structure is:
```
PRINT 'filename': windowname, option;
```
If you want to print directly to the printer, type PRN as the file name. Valid options are:
- **A** Append to existing file; create the file if it does not exist.
- **O** Overwrite any existing file; create the file if it does not exist.
Either option can be used for direct printer output.
**EXAMPLE:**
Print PRN : Intro, SessionA, Hookup1, A;----------{prints the windows Intro, SessionA, and Hookup1 to the printer};
Print 'HookUp.Prn': SessionA, O; {prints the window SessionA to the text file 'hookup.prn', overwriting 'hookup.prn' if it exists.};
Print CustmName : SessionA, A; {appends the window SessionA to a text file named with the value stored in the variable CustmName};
If the printer (PRN) is selected but is not ready, a run-time warning is given and the program continues. Nothing is printed, but otherwise execution is normal.
**COLORS**
**TEXTCOLOR** through **HELPFRAME** The writer of the FILENAME.FMT program may change the various colors the user will see via these instructions. Each of these instructions requires a color as its parameter. Possible colors are:
- black
- blue
- green
- cyan
- red
- magenta
- brown
- light gray
- white
**CLRWIN** Clears the active window, or clears the screen if no windows are displayed.
The active window is not removed; it is only cleared.
**REMOVEWIN** Removes the active window. If the active window is the last window (not a SUBWINDOW), then the windows section is left as if F4 had been pressed.
**TYPE** Writes the quoted string to the active window. Writes to the screen if no windows are active. No carriage return is sent at the end of the line, so subsequent writes will be on the same line.
**TYPELN** Same as Type, only it places a carriage return at the end of the line.
**HELP** The default pop-up HELP window tells the user about the following keys:
- **F1** Display help screen.
- **F3** Leaves the program, abandoning all of the changes that were made unless the program is already in the Compile section. If in the Compile section, changes may or may not be saved. The program asks if the user would like to quit.
- **F4** Leaves the window section and begins the Compile section.
**Cursor Movements** The arrow keys, HOME, END, PAGE UP, and PAGE DOWN keys will move the cursor to the different responses. The ESC key removes the current window.
The HELP window can be changed by the developer. If a window is declared with the HELP attribute set, the text in it will be displayed instead of the default message. See the section on window declarations.
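The fractional entry numbering used by INSERT (a program inserted into an occupied slot zz becomes entry (zz-1).001, then (zz-1).002, and so on) and the integer renumbering done by RENUM can be modeled in a few lines of Python. This is only an illustrative sketch of the numbering scheme described earlier, not part of DLDMOD itself; the function names and the dict-of-floats representation are invented here.

```python
def insert_entry(entries, program, at):
    """Model of INSERT: place `program` at integer entry `at`.

    `entries` maps entry numbers (floats) to programs.  If entry `at`
    is occupied, the new program is numbered just ahead of it using
    the fractional scheme (at-1).001, (at-1).002, ...  Returns the
    entry number actually used.
    """
    key = float(at)
    if key not in entries:
        entries[key] = program
        return key
    step = 0
    while key in entries:
        step += 1
        key = round(at - 1 + step / 1000.0, 3)
    entries[key] = program
    return key

def renum(entries):
    """Model of RENUM: renumber all entries by consecutive integers."""
    return {float(i + 1): prog
            for i, (_, prog) in enumerate(sorted(entries.items()))}
```

For example, inserting into an occupied entry 3 yields entry 2.001; a second insert at 3 yields 2.002; a final `renum` restores plain integer numbering while preserving order.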
Where do we go from here? Research and commercial spoken dialog systems.

Roberto Pieraccini, Juan Huerta
IBM T.J. Watson Research Center
1101 Kitchawan Road, Route 134
Yorktown Heights, NY 10598

Abstract

The spoken dialog industry has reached a maturity characterized by a vertical structure of technology vendors, platform integrators, application developers, and hosting companies. At the same time, industrial standards are pervading the underlying technology and providing higher and higher levels of interoperability. On one hand, commercial dialog systems are largely based on a pragmatic approach which aims at usability and task completion. On the other hand, spoken dialog research has been moving on a parallel path, trying to attain naturalness and freedom of communication. However, the evolution of the commercial path shows that naturalness and freedom of expression are not necessarily a prerequisite for usability, given the constraints of the current technology. The difference between the two goals has influenced a parallel evolution of the architectures, and in particular of the dialog management abstractions. We believe it is time to take a high-level perspective on both lines of work and aim at a synergistic convergence.

1 Introduction

There are different lines of research in the field of spoken dialog. Some researchers attempt to understand, and possibly replicate, the mechanisms of human dialog through linguistically motivated studies on human-human corpora. Others are interested in general design principles that, once applied, would result in usable human-machine user interfaces based on speech recognition and speech synthesis technology. Then there is spoken dialog system engineering (McTear, 2004), which aims at developing programming styles, models, engines, and tools which can be used to build effective dialog applications. The three lines of research are, in a way, orthogonal and complementary.
The focus of the first is on understanding human communication, the second on designing the interface for usable machines, and the third on building those usable machines. The topic of this paper is the latter, namely the engineering of spoken dialog systems. However, any discussion of the engineering of dialog systems would be flawed if we did not take into consideration both the nature of human-human dialog—as this is the most efficient realization of spoken dialog available in nature—and the goal of usability. The goal of usability—i.e., building machines that are usable by untrained users—is often confused with that of building human-like conversational systems. This confusion rests on the tacit assumption that a machine that approximates human behavior—from the linguistic point of view—is certainly more usable than one that does not. Although possibly true in the limit, this assumption is often misleading, especially if we consider that the performance of spoken language technology today is still far from near-human performance. Nevertheless, most of the research during the past decade was directed towards unconstrained natural language interactions, based on the assumption that naturalness and freedom of expression are the essential goals to pursue, and that usability would automatically follow from reaching those goals. The limitation of current spoken language technology is a fact we have to live with. Thus, if we undertake the goal of building usable systems given that limitation, we find that, for a large number of useful applications, naturalness and freedom of expression may actually hinder usability (Oviatt, 1995; Williams and Witt, 2004).
For instance, let us consider spoken language understanding technology.\(^{1}\) In spite of the advances of the past decade, even in well defined domains, unrestricted understanding of speech is still far from being on a par with humans. So, any spoken language system that encourages free and natural user interactions is bound to a non-negligible level of understanding errors. Moreover, as of today, there are no viable error-recovery dialog strategies available for unconstrained natural language interactions. Conversely, there are several types of transactional applications that achieve high usability with interactions that are neither natural nor free. After all, some call centers adopt scripts to be followed by their customer service representatives (CSRs) which do not leave much freedom to callers. Most of the applications in this category are characterized by a domain model that is well understood by the user population. For instance, the model for ordering pizzas is known to most users: a number of pies of a certain size (small, medium, or large) with a selection of toppings (mushroom, pepperoni, etc.). The same applies to the flight status domain model: flights can be on time, late, or cancelled. They arrive and depart daily from airports which serve one or more cities, and can be identified by a number or by their itinerary and time. Banking, stock trading, prescription ordering, and many other services belong to the same category. Generally, when the domain model is quite simple and known by the users, as in the above cases, applications can be implemented in a structured dialog fashion, generally referred to as directed dialog.

---

\(^{1}\) With the term spoken language technology we refer to all the technologies that attempt the replication of human spoken language skills by machines, including speech recognition, spoken language understanding and translation, speech synthesis, and text to speech.
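A minimal sketch can make the directed-dialog idea concrete: each dialog state pairs a prompt with the small recognizer "grammar" designed for exactly that turn. The pizza-domain prompts and the set-of-phrases stand-in for a real grammar are invented here for illustration.

```python
# Hypothetical directed-dialog turns: each state pairs a prompt with a
# small grammar (modeled here as a set of allowed phrases), so the
# recognizer only has to discriminate among a few expected answers.
TURNS = {
    "size":    ("What size pizza: small, medium, or large?",
                {"small", "medium", "large"}),
    "topping": ("Which topping: mushroom or pepperoni?",
                {"mushroom", "pepperoni"}),
}

def run_turn(state, user_utterance):
    """Return the recognized value, or None if it falls outside the
    grammar active for this state (i.e., an out-of-grammar utterance)."""
    _prompt, grammar = TURNS[state]
    utterance = user_utterance.strip().lower()
    return utterance if utterance in grammar else None
```

The point of the sketch is the per-turn restriction: `run_turn("size", ...)` never has to consider topping words at all, which is what lets directed dialog keep recognition accuracy high.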
Directed dialog, even if seemingly more restrictive from the point of view of the user, can attain much higher usability and task completion rates than free-form interaction does with the current technology. In fact, when users are prompted to provide specific pieces of information, the system can activate grammars designed to collect exactly that information. Moreover, as discussed in (Oviatt, 1995), user guidance reduces user disfluencies. Thus, the combination of user direction, strict grammars, and fewer disfluencies can attain quite high speech recognition rates. On the other hand, a more open interaction would increase the space of possible user expressions at each turn, thus causing a reduction of recognition accuracy. Furthermore, without direct guidance, most users would be lost and would know neither what to say, nor what the capabilities and limitations of the system are. The realization that well structured directed dialog strategies may outperform natural language free-form interactions was reached by speech technology vendors during the early and mid 1990s. The development of a spoken dialog market during those years led to the rise, in the late 1990s, of a well structured industry of speech engines, platform and tool vendors, application developers, and hosting companies, together with an increased attention to industrial standards. Several standards today govern the speech industry, such as VoiceXML 2.0,\(^{4}\) MRCP2, SRGS, SSML, CCXML,\(^{8}\) and EMMA.\(^{9}\) The speech and the Web worlds started to merge, and the benefits of this standardization trend gained momentum, amplified by the simultaneous emergence of Web standards (e.g., J2EE, JSP, etc.). It is interesting to notice that the research community often started from dialog approaches based on general principles (e.g., Grice, 1975) that, once coded, give machines a reasonable behavior for reacting to different dialog situations.
Then, in order to cope with the limitations of the technology, research started falling back to more restrictive dialog strategies. In contrast, the commercial community started from a pragmatic approach, where each interaction is practically designed in the minimal details by voice user interface (VUI) experts (Barnard et al., 1999). After mastering the crafting of directed dialog applications, the commercial community is now moving towards more free-form types of interactions. One example is the class of applications where directed dialog cannot be applied. Applications of this type are characterized by a domain model which is complex and unknown to the majority of users. Help desk applications, for instance, fall in this class. For example, a directed dialog system for routing callers to the appropriate computer support might prompt the user with: *Is your problem related to hardware, software, or networking?* But users, most likely, would not know which of the three categories applies.

---

\(^{2}\) One of the problems arising when trying to implement error recovery in unconstrained speech is the automatic detection of recognition errors. In fact, today's speech recognition confidence measures are still highly unreliable, especially when one attempts to apply them to portions of an utterance. Without viable error correction, interaction with machines may be extremely frustrating for the user.

\(^{3}\) As a matter of fact, human-human flight reservation generally follows a precise script that is dictated by the order of the entries in the CSR database.

\(^{4}\) http://www.w3.org/TR/voicexml20/

\(^{8}\) Call Control Markup Language: a language for the control of the computer-telephony layer -- http://www.w3.org/TR/ccxml/

\(^{9}\) Extensible Multi Modal Annotation: a language for the representation of semantic input in speech and multi-modal systems -- http://www.w3.org/TR/emma/
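One pragmatic remedy to the help-desk routing problem above is to map a caller's free-form description onto a fixed set of routes. The sketch below is a toy keyword scorer standing in for the trained statistical classifiers used in practice; the route names and keyword lists are invented for illustration.

```python
# Toy stand-in for statistical call routing: score an open-ended
# utterance against keyword sets for a fixed set of routes.  A deployed
# system would use a classifier trained on transcribed calls instead.
ROUTES = {
    "hardware":   {"disk", "screen", "keyboard", "printer", "broken"},
    "software":   {"install", "program", "crash", "update", "application"},
    "networking": {"internet", "wifi", "connection", "email", "network"},
}

def route(utterance):
    """Return the best-scoring route, or 'unknown' when nothing matches
    (the 'unknown' outcome would trigger a re-prompt or operator escape)."""
    words = set(utterance.lower().split())
    scores = {r: len(words & kw) for r, kw in ROUTES.items()}
    best = max(scores, key=scores.get)
    return best if scores[best] > 0 else "unknown"
```

Note the structural property this preserves: whatever the caller says, the outcome is one of a small predefined set of categories, so the surrounding dialog can remain directed.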
A solution would be to provide a menu that includes all possible problems, but such a menu would be too large to enumerate, and building a grammar that captures all the possible expressions that can be used to describe all the possible problems is impractical. In other words, the underlying domain model is largely unknown, or vague at best, to users. The solution to this problem consists of letting callers express themselves freely, and backing the system with a statistical classifier able to assign utterances to one of the predefined categories. This technique, known as How May I Help You (Gorin et al., 1997), statistical call routing, or statistical natural language understanding (Chu-Carroll and Carpenter, 1999; Goel et al., 2005), is just a simplified form of language understanding which combines the robustness of a structured approach (a limited number of categories, or routes) with the flexibility of natural language (an open prompt leading to a large number of possible user expressions). In fact, the dialog can still be structured in a directed dialog manner, because the output of the interaction is going to be one of a predefined number of categories.

2 VUI Completeness

The need for a detailed control of the VUI is thus an important factor driving the architectural and engineering choices in commercial dialog systems. We call this the *VUI-completeness* principle: the behavior of an application needs to be completely specified with respect to every possible situation that may arise during the interaction. No unpredictable user input should ever lead to unforeseeable behavior. Only two outcomes are acceptable: either the user's task is completed, or a fallback strategy is activated (e.g., escape to an operator, or an explicit failure statement). In order to ensure that an application is *VUI-complete*, its behavior needs to be specified for each possible situation, or class of situations.
Today, a complete VUI specification is standard procedure in commercial deployments. It is generally represented by a graph that describes all the possible dialog states, complemented by tables that describe the details of each state. Transitions between dialog states are described by conditions based on the user input and other pieces of information (e.g., previous user inputs, backend responses, personal user information, etc.). The precise wording of system prompts is also specified in the design, along with an indication of the type of utterances accepted at each turn. The VUI specification document is then handed to a team of developers who implement the application using the platform of choice. In order to reduce development costs, it is thus important to guarantee a direct mapping between the formalisms and abstractions used by the VUI designers and the programming model available to the developer. This is the reason why most commercial dialog managers follow the same abstractions utilized in the VUI specification.

2.1 Control and Expressiveness

In order to allow developers to implement detailed VUI specifications, the programming paradigm adopted by the dialog manager or authoring tools should allow fine control of the system behavior. However, a too low-level development paradigm may result in prohibitive development costs for large and complex applications. Hence the programming paradigm also needs to be expressive enough to allow implementing complex behavior in a simple and cost-effective way. These two features often compete: in order to guarantee more expressiveness, the dialog manager has to provide sophisticated built-in behavior, which may be hard to bypass when one wants to attain detailed control of the interface. An effective dialog manager is thus the result of a trade-off between control and expressiveness.
This can be summarized by the following principle: *simple things should be easy, complex things should be possible.*

3 Dialog Management

The design of a proper dialog management mechanism is thus at the core of dialog system engineering. The study of better dialog managers and proper dialog engineering is a way to reduce application development costs. But it is also a way to move towards more sophisticated human-machine interactions, since it is only with proper engineering of dialog systems that we can raise the complexity threshold that separates what is realizable from what is not. There is no agreed-upon definition of what a dialog manager is; different systems described in the literature attribute different functions to it. Some of these functions are, for instance: integrating new user input, resolving ambiguities, confirming and clarifying the current interpretation, managing contextual information, communicating with the backend, managing speech recognition grammars, generating system outputs, etc. In fact, the minimal functionality required of a dialog manager covers two fundamental aspects of all interactive applications: keeping track of session state, and deciding what the next action for the system to take is. Of course, there are many ways of coding these two functions in order to achieve a desired interactive behavior.

4 Reference Architectures

In order to describe different approaches to dialog management, it is important first to define, at a high level, the architecture of spoken dialog systems. Figure 1 shows a general functional architecture of a dialog system, mostly used in research prototypes. Input speech is collected via a telephone\(^{10}\) interface and dispatched to the speech recognition engine, which provides one or more recognition results (for instance, the \(n\)-best recognition results). Each recognition result is then fed to a natural language understanding processor which extracts the semantics of the utterance.
A formal representation of the semantics, generally a structured set of attribute-value pairs, is then passed on to the dialog manager. The dialog manager, based on the current utterance semantics and on the stored contextual information derived from previous turns, decides the next action to take according to a dialog strategy. The most obvious action performed by the system as a response to a user utterance is a system utterance, or prompt, which can be generated as text and transformed into speech by a text-to-speech engine, or selected from a set of pre-recorded samples.\(^{11}\) Other types of action performed by the dialog manager include interactions with the backend system, or any other type of processing required by the application. The architecture described above has been implemented in many different forms in research. Of particular interest is the Galaxy architecture (Seneff et al., 1999), which was used in the DARPA Communicator\(^{12}\) project and allowed interchange of modules and plug-and-play across different research groups. One thing to notice in the architecture described above is that the language models used by the speech recognition and natural language understanding engines are supposed to be constant throughout a whole session. In fact, one of the basic assumptions behind most research prototypes is that the system should be able to understand all the possible expressions defined by the language model at any point during the interaction. However, it is clear that there is a correlation between the distribution of possible utterances and the dialog state or context. Thus, in order to improve system performance, the dialog manager can change the parameters of the language model and language understanding depending on the current dialog context. Several systems did implement this feedback loop, with resulting improved performance (Xu and Rudnicky, 2000). Commercial system architectures evolved in a different way.
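Before turning to commercial architectures, the research-style pipeline just described (recognition result, attribute-value semantics, dialog manager choosing the next action from semantics plus context) can be sketched end to end. The rule-based "understanding" step and the flight-status slots below are toy stand-ins invented for illustration.

```python
# Illustrative sketch of the research-style pipeline: recognized text ->
# attribute-value pairs -> next system action from semantics + context.
def understand(text):
    """Map a recognized utterance to attribute-value pairs (toy rules
    standing in for a real natural language understanding processor)."""
    frame = {}
    words = text.lower().split()
    if "arrivals" in words:
        frame["query"] = "arrivals"
    if "departures" in words:
        frame["query"] = "departures"
    for city in ("boston", "austin", "denver"):
        if city in words:
            frame["city"] = city
    return frame

def next_action(frame, context):
    """Dialog strategy: merge new semantics into the session context,
    then pick the next action based on what is still missing."""
    context.update(frame)
    if "query" not in context:
        return "ask_query_type"
    if "city" not in context:
        return "ask_city"
    return "lookup_backend"
```

The `context` dict carries information across turns, which is exactly the "stored contextual information derived from previous turns" the text refers to.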
The basic assumption on which most deployed commercial systems were based, and still are, is that properly designed prompts can effectively control the space of user expressions. If that is true, then at each turn there is no need for the system to be able to understand all the possible expressions that users could say. Users are in fact directed (thus the term directed dialog) and enticed into speaking exactly what the system expects. It is clear how this assumption, if true, can potentially allow the attainment of very high task completion rates. Under this assumption, commercial dialog systems provide the speech recognizer with an appropriately designed grammar at each turn of the interaction. Each grammar—typically an SRGS standard context-free grammar with semantic attachments—is specifically designed to accept the utterances that are expected to be possible user reactions to the specific prompt played at that particular turn. So, instead of a generic prompt like *Hello, this is the XYZ flight status information line, how can I help you today?* commercial dialog system designers use more specific prompts such as *Are you interested in arrivals or departures?* or *From which city is the flight departing?* The benefit of using restricted grammars in directed dialog applications becomes evident when looking at the error control logic typically adopted by commercial systems. In fact, even with very restricted grammars, there is always a chance for the recognizer to produce erroneous interpretations, or for the user to speak utterances outside the domain. Thus, in case of poor recognition scores, commercial dialog systems direct users to correct a presumably erroneous interpretation by using very strict prompts, such as: *I think you said Austin, is that correct? Please say yes or no.* And since the system cannot afford to confuse a yes with a no at this point in the dialog (misrecognitions in correction sub-dialogs would lead to enormous user frustration), the grammar after this prompt is restricted to yes/no utterances and a reasonable number of synonyms. Early commercial dialog systems were built using proprietary architectures based on IVR (Interactive Voice Response) platforms. Soon, the speech application development community realized the importance of industrial standards and started to create recommendations to guarantee interoperability of platforms and engines. After the introduction of VoiceXML 1.0 in the year 2000, conversational systems started to conform to a general Web architecture, such as the one shown in Figure 2. The convergence of speech and Web technologies (the so-called Voice Web) has allowed the speech industry to leverage existing Web skills and resources, and to reduce the need for specialized developers.

![Typical architecture of a commercial dialog system.](image)

The core of commercial dialog systems, exemplified by Figure 2, is the *voice browser*, which accepts documents written in a markup language specific to speech applications, such as VoiceXML. The voice browser exchanges information with a Web server using the internet protocol (IP), in analogy with the browser and server in traditional visual Web applications. VoiceXML markup documents instruct the browser to activate the speech resources (speech recognition, TTS, prompt player, etc.) with a specific set of parameters, such as a particular grammar for the speech recognition engine, a prompt to be synthesized by the text-to-speech system, or an audio recording to be played. Once the user's speech has been recognized, and the recognition results returned to the browser in the form of a structured set of variables, the browser sends them back to the Web server, together with a request for another VoiceXML document. The Web server then replies by sending the requested document to the browser, and the interaction continues in this fashion. Using plain vanilla VoiceXML, the dialog manager function is actually distributed across the various VoiceXML documents. In fact, each document includes instructions for the browser to request the next document once the current one has been executed. All the VoiceXML documents and the corresponding resources (such as grammars, prompts, etc.) are typically stored statically on the Web server and served to the browser upon request.\(^{13}\) However, as happened in the visual Web world, developers found the mechanism of encoding the whole system in static VoiceXML pages quite limiting, and soon they started to write programs on the server to generate dynamic VoiceXML documents. In this case the application is actually managed by a program running on the application server, which acts as a dialog manager. The introduction of J2EE/JSP technology makes this process straightforward and in line with mainstream Web programming. Generating VoiceXML dynamically on the server has the advantage of providing the developer with more powerful computational capabilities than those available on the voice browser client, and thus accommodating in a more flexible way the dynamic nature of sophisticated interactions and business logic. Moreover, there are security restrictions on the client that may prevent direct access to external resources, such as backend databases.

---

\(^{10}\) We refer here to telephone-based systems. However, the concepts expressed in this paper can be generalized to other types of systems that do not make use of telephone communication, such as embedded systems for mobile devices and for automobiles.

\(^{11}\) High quality prompts are today obtained by splicing pre-recorded phrases with TTS generated content, using concatenative speech synthesis.

\(^{12}\) http://communicator.sourceforge.net/
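The server-side generation of VoiceXML described above can be sketched as a small function that builds the next document from application state. The element names (`vxml`, `form`, `field`, `prompt`, `grammar`) follow VoiceXML 2.0, but the function name, the `/dialog` submit target, and the surrounding servlet machinery are invented for illustration.

```python
# Sketch of dynamic, server-side VoiceXML generation: a handler on the
# application server emits the next document for the voice browser.
def vxml_field_document(field_name, prompt_text, grammar_uri):
    """Return a one-field VoiceXML 2.0 document that plays a prompt,
    activates a grammar, and submits the result back to the server."""
    return (
        '<?xml version="1.0" encoding="UTF-8"?>\n'
        '<vxml version="2.0" xmlns="http://www.w3.org/2001/vxml">\n'
        '  <form>\n'
        f'    <field name="{field_name}">\n'
        f'      <prompt>{prompt_text}</prompt>\n'
        f'      <grammar src="{grammar_uri}"/>\n'
        '    </field>\n'
        '    <block><submit next="/dialog"/></block>\n'
        '  </form>\n'
        '</vxml>\n'
    )
```

Because the document is computed per request, the dialog manager running on the server can choose the prompt and grammar turn by turn, which is exactly what static VoiceXML pages cannot do.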
The evolution of server-based programming of applications brought the separation of the dialog management functionality from the presentation (i.e. the activation of speech engines, the playing of prompts, etc.), and the realization of general-purpose dialog managers and programming models for developing speech applications on the server. In spite of the different architectural evolution of research and commercial dialog systems, the need for a powerful dialog manager is felt by both communities. In the next few sections we will discuss some of the models of dialog manager that have been introduced in recent years.

### 5 Programmatic Dialog Management

The simplest form of dialog manager is a generic program implemented in C++ or Java (or as a Java servlet in the case of Web based architectures) implementing the application without an underlying generic interaction model. Early commercial dialog applications were typically developed on the deployment platform as native code following a given VUI specification. Before the advent of VoiceXML and the Web programming paradigm for voice applications, IVR vendors integrated speech recognition engines directly in their platforms, which had proprietary programming environments or proprietary APIs. However, building each application from scratch soon becomes an inefficient and repetitive activity. As in all areas of software development, vendors tried to reduce the cost of application development by introducing libraries of reusable functions and interaction templates, often for internal consumption, but also as products that could be licensed to third parties.

---

13 Voice browsers use caching strategies similar to those used by visual Web browsers. So, large grammars may be cached on the client, thus avoiding large resource provisioning latency.
Libraries were also complemented by programming frameworks, generally in the form of sample code or templates, which could be reused and adapted to different applications. Dialog modules, developed by various speech recognition and tool providers, constitute one of the first forms of commercial reusable dialog functions. Dialog modules encapsulate all the low-level activities required to collect one or more pieces of information from the user. That includes prompting, re-prompting in case of rejection and timeout, confirmation, disambiguation, etc. The collection procedure, including prompts, grammars, and logic for standard pieces of information, such as dates, times, social security numbers, credit card numbers, currency, etc., was thus encoded once and for all in pieces of reusable and configurable software. Developers could also build their own custom dialog modules. Thus dialog modules became, for many, the standard approach to directed dialog. Applications were then implemented with the programming model available for the chosen platform. Each state of the dialog flow was associated with a specific dialog module, and the programming model of the platform was the glue used to implement the whole dialog.

### 6 Finite State Control Management

The finite state control dialog manager is an improvement on the programmatic dialog manager: it implements a separation between the logic of directed dialog and its actual specification. The logic is implemented by a finite state machine engine which is application independent and thus reusable. Rather than coding their own finite state machine mechanism, developers had to provide a description of the finite state machine topology in terms of a graph of nodes and arcs. Often the topology could be derived from the VUI specification. Then developers had to complement that with a set of custom functions required by the application.
Without a separation between the finite state machine mechanism and its topology, the implementation of the dialog state machine logic was often left to the programming skills of developers, often resulting in an unmanageable spaghetti-like nest of if-else or case statements, with increased debugging and maintenance costs, making it impossible to build applications above a certain level of complexity. One of the obvious advantages of the finite state control management approach is that the topology of the finite state machine is generally easier to write, debug, and maintain than the finite state machine mechanism itself. Moreover, the finite state machine engine can allow for hierarchical and modular dialog definition (e.g. dialogs and sub-dialogs). Finally, the engine itself can be harnessed to verify the overall topology, check for obvious design and implementation mistakes, such as unreachable nodes, loops, etc., and provide debugging and logging facilities. More sophisticated engines can have built-in behavior, such as handling specific navigation across dialog networks, recording usage information for personalized services, implementing functions such as back-up and repeat, etc. (Pieraccini et al., 2001).

The simplest form of finite state control dialog manager is built around the concept of call-flow, developed initially for IVR systems. In its simplest realization a call flow is a graph where the nodes represent prompts, and the arcs represent transitions conditioned on the user choice (e.g. Figure 3). By navigating the call flow graph and selecting the right choices, the user can reach the desired goal and complete the task. The call flow model is quite limited and breaks down for complex dialog systems, since one has to explicitly enumerate all the possible choices at any node in the dialog. In fact the pure call-flow model is inadequate to represent even modest levels of mixed initiative, such as over-specification, i.e.
more than one piece of information in a single utterance. For instance, if asked for the date of a flight in a mixed initiative system that allows for over-specified requests, users may instead respond with any subset of date, origin, destination, and airline. In order to be able to handle this, the simple call flow model would need to represent explicitly all the possible subsets of user choices (e.g. date, date + time, date + origin, ... date + origin + destination, ...), making the design and development impractical. However, one can easily extend the concept of call-flow and allow the state machine to assume any topology, to invoke any arbitrary function (action) at each node, and to assume any arbitrarily complex condition on the arcs. Furthermore, one can allow any arbitrarily complex data structures (session state) to be writable and readable by the actions associated with the nodes. In this new extended form, the finite state control dialog manager (we will refer to it as the functional model) has enough expressive power to represent sophisticated directed dialog and mixed initiative interactions. A full functional model of dialog management can also allow for recursion, i.e. full dialogs specified in a functional fashion can themselves be used as actions and associated with nodes of a higher-level dialog, thus enabling hierarchical description of applications, and promoting modularity and reuse.

---

14 Some platforms used GUI application development environments that were originally designed for touch-tone (DTMF) applications, and then extended for handling speech recognition and TTS. Others allowed access to the functionality of the IVR and the speech recognition/TTS engines through a published proprietary API that could be used in C, Java, Visual Basic, etc.

15 It looks like the spoken dialog community has a penchant for applications related to flights. We hope to see other domains of interest in the future.
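The extended functional model just described can be sketched in a few lines. This is a hypothetical illustration, not the implementation of any of the cited systems: the engine runs the action attached to the current node (which may read and write a shared session state) and then follows the first arc whose condition holds.

```python
# Minimal sketch of a functional finite-state dialog controller.
# nodes: node name -> action(state); arcs: node name -> list of
# (condition(state), target node); state is the shared session state.
def run_controller(nodes, arcs, start, state, final):
    node = start
    while node not in final:
        nodes[node](state)                     # action may update the state
        for condition, target in arcs[node]:   # first arc that fires wins
            if condition(state):
                node = target
                break
        else:
            raise RuntimeError(f"no arc fires at node {node!r}")
    return state
```

Note that the dialog state here is the pair (current node, session state), which is why, as argued below, such a controller is not computationally a finite state model of dialog.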
An example of a control graph that handles over-specified utterances is shown in Figure 4 (it will be explained later in this paper). More detailed descriptions of functional models of dialog management can be found in (Pieraccini et al., 1997; Pieraccini et al., 2001). There are common misconceptions about the effective expressive and computational power of the finite state dialog model: limited capabilities are often attributed to it with respect to more sophisticated abstractions. This misconception derives from the confusion between a simple call flow model, which is completely described by a state machine with prompts on the nodes and choices on the arcs, and the richer functional model described above. In its simpler form the call flow model is indeed, computationally, a finite state model of dialog: i.e. the state of the dialog is univocally determined by the node of the call flow. In contrast, the functional model allows arbitrary functions at each node to manipulate arbitrary memory structures that can be shared across nodes. Thus the extended functional model is not, computationally, a finite state model of dialog; it just makes use of a finite state representation for the dialog control mechanism. In fact each node of the finite state machine describing the dialog control does not univocally represent the state of the dialog, because we also need to take into consideration the state of all the memory structures associated with the controller (e.g. the session state). A functional dialog manager is equivalent to a procedural program with a fixed structure based on nested conditional or case statements. The nodes are equivalent to function calls, the conditions are equivalent to the conditional statements, and a whole dialog is analogous to the definition of a function.
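The equivalence just stated can be made concrete with a toy two-field sub-dialog written as plain conditionals (a hypothetical sketch; the field names and `ask_*` callables are illustrative):

```python
# The control that a two-node graph would express, written as nested
# conditionals: each "node" becomes a function call, and each arc
# condition becomes an if statement over the session state.
def sub_dialog(state, ask_origin, ask_date):
    if state["origin"] is None:          # arc condition: !origin
        state["origin"] = ask_origin()   # node: origin?
    if state["date"] is None:            # arc condition: !date
        state["date"] = ask_date()       # node: date?
    return state
```

With more fields and mixed initiative, this hand-written form degenerates into exactly the nest of conditionals the text warns about, which is the argument for authoring the graph instead.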
However, a functional dialog manager specification is much easier to author and debug than a set of nested conditional or case statements (as a proof of this, we leave to the reader the exercise of rewriting the controller in Figure 4 as a series of nested if-else-if-else statements).

#### 6.1 Handling Mixed Initiative in Functional Models

A clear limitation of functional models is that they often require a complete topological definition of the task, which may be rather complex for certain types of applications. For instance, the implementation of mixed initiative interactions may result in a control graph with a large, unmanageable number of arcs. One way to reduce the cost of designing and developing mixed initiative dialog applications within the functional model paradigm consists in providing the controller engine with a behavior that corresponds to complex topologies, without the need for the developer to specify those in terms of nodes and arcs. For example, in (Pieraccini et al., 2001), the concept of state transition was extended to include special GOTO and GOSUB arcs to easily implement topic changes and digressions at any node of the dialog. Powerful engines for functional dialog models can also allow for effective authoring of global transitions that apply to whole sets of nodes.

#### 6.2 Fixed Topology Models

One can implement functional dialog managers that allow the developer to specify the control graph topology (Carpenter et al., 2002). On the other hand, one could restrict the control graph to assume a fixed topology and allow developers to specify only a limited number of parameters. The Form Interpretation Algorithm (FIA), the basis for the VoiceXML standard, is an example of a functional model of dialog management with a fixed topology. The topology of the FIA controller is in fact shown by the example in Figure 4.
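The essence of the fixed FIA topology can be sketched as a loop that repeatedly visits the first unfilled field of a form and prompts for it. This is only a rough sketch of the control flow, not the actual VoiceXML algorithm (which also handles events, guard conditions, and re-prompting); the field names and the simulated recognizer are assumptions.

```python
# FIA-style loop: visit the first unfilled field, prompt for it, and
# store whatever slots the recognizer returns. An over-specified
# answer may fill several fields at once.
def fia(fields, prompts, recognize):
    form = {f: None for f in fields}
    while any(v is None for v in form.values()):
        field = next(f for f in fields if form[f] is None)
        # recognize() returns a dict of slot -> value; it may contain
        # more slots than the one asked for (over-specification).
        for slot, value in recognize(prompts[field]).items():
            if slot in form and form[slot] is None:
                form[slot] = value
    return form
```

Because filled fields are simply skipped on the next iteration, the same fixed loop accommodates any order of user answers, which is the property discussed next.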
The FIA topology is particularly suited for handling over-specified requests, allowing users to fill multiple-field forms in any order. For instance, if after the initial question *Which flight?* the user specifies the destination and the airline, the arc !origin is traversed and the node origin? is executed next. As a result the user is asked to provide the origin of the flight. Then the date? node is executed, since the condition !date is true. After the user has provided all the required pieces of information (origin, destination, airline, and date) the sub-dialog exits through node 3.

Another example of a functional model with a fixed-topology controller is the MIT dialog management system (Seneff and Polifroni, 2000). In this case the control is defined by a sequence of functions that are activated when the conditions associated with them fire. Each function can modify a session state (i.e. a frame memory structure) by adding additional information, including a flag which instructs the controller on what to do next. Possible flags are: CONTINUE, causing the execution of the next rule in the sequence; RETURN, causing the controller to return to the initial rule; or STOP, ending the execution. Again, as in the VoiceXML case, developing a dialog does not require the description of the control graph, which has the functional form described by Figure 5, but only the specification of the functions associated with the nodes, and of the conditions. The following is an example of a set of rules that implement the same sub-dialog as the one in Figure 4:

!origin → prompt_origin()
!destination → prompt_destination()

*Figure 5. Functional control graph representing a rule based system.*

### 7 Inference Based Dialog Managers

We have shown in the previous section how several forms of dialog manager can be reduced to a unique underlying model: the functional finite-state dialog controller.
The difference between them is in whether developers are allowed to change the topology of the controller, and in the type of authoring (e.g. graph or rules). However, there are classes of applications for which a specification through a finite state controller may prove impractical. As we discussed earlier, transactional applications with a well defined goal (e.g. giving information to the user, selling or buying, etc.) can often be effectively implemented with a finite state controller. On the contrary, applications of the problem solving type (Allen et al., 2000) require a higher degree of planning, for which the finite state controller can be quite inappropriate. These types of applications, as of today, have not yet found a channel to the market of spoken dialog systems, partially because they are not yet at a level to demonstrate commercial viability. In fact their deployment still requires specialized development teams and is thus quite expensive. Moreover, the performance of the resulting systems is not yet at the level required for commercial exploitation. In spite of the difficulty, the research community has been actively pushing the technology towards the solution of the dialog management problem for complex systems, especially under the auspices of the DARPA Communicator program. Successful prototypes have been demonstrated and tested based on sophisticated dialog managers that deviate from the finite-state controller model, and include some degree of inference. A distinguishing feature of the inference based systems is that they refrain from attempting a more or less explicit description of the relationship between states and actions, as in the finite state controllers, and rather resort to engines that make decisions on the next action to perform based on a general strategy and on a formal description of the domain, typically in terms of goals and sub-goals.
Thus, in order to develop an application, one starts from a formal description of the domain model, in such a way as to allow the inference engine to drive the system to a cooperative solution. In (Stallard, 2001) the dialog control model is described by a tree representing the goal/sub-goal structure, with the leaves of the tree being the actions. Actions, which include conditions for their execution, are associated with individual goals. Internal nodes represent conditional controls on the execution of the underlying nodes. A dialog manager based on task ontology and a hierarchy of nodes is described in (Pellom et al., 2000). The dialog manager described in (Wei and Rudnicky, 2000) constructs a dynamic structure, called the agenda, which is practically a list of sub-goals, where each sub-goal corresponds to the collection of some piece of information. A task is completed when all the items in the agenda are completed. The agenda is created, dynamically, by traversing a tree (the product tree) that describes the task to accomplish at any point in time. The product tree is dynamically created since the nature of the task may be dynamic as well (e.g. the number of legs in a flight is determined during the interaction and not known beforehand). In the form based dialog manager described in (Papineni, 1999) the inference mechanism is driven by a numerical function computed on a set of partially completed forms (i.e. sets of task-relevant slots), based on how close each individual hypothesized form is to the goal (i.e. the retrieval of information from the database).\(^{17}\) Another line of research is based on statistical learning of the dialog strategy using mathematical models derived from statistical machine learning, such as Markov Decision Processes (Levin, 2000) or Bayesian network frameworks (Meng, 2003).
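As a rough illustration of the agenda idea, loosely inspired by the mechanism described above and not the actual implementation of (Wei and Rudnicky, 2000), deriving an agenda from a product tree can be sketched as collecting its unfilled leaves in traversal order. The nested-list encoding of the tree is an assumption made for brevity.

```python
# Sketch: a product tree whose leaves are information items to collect;
# the agenda is the ordered list of leaves still missing a value. Since
# the tree may change during the interaction (e.g. legs added to a
# trip), the agenda is recomputed by re-traversing it.
def agenda(tree, filled):
    """tree: nested lists of item names; filled: dict of item -> value."""
    if isinstance(tree, str):                    # a leaf sub-goal
        return [] if tree in filled else [tree]
    items = []
    for child in tree:                           # an internal node
        items.extend(agenda(child, filled))
    return items
```

The task is complete when the agenda is empty.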
It is still too early to understand whether automated design of dialog can allow building usable systems with a quality comparable to that of those designed by expert VUI designers. It is not yet clear whether any of the sophisticated inference dialog managers developed in research could be effectively used for mass production of commercial systems. One of the problems is that their behavior is quite complex, and it may be difficult to predict all possible situations that could arise during the interaction. Thus VUI completeness may be hard to achieve. Research prototypes, so far, have been built by researchers with an intimate knowledge of the quirks of the dialog manager itself. Thus, in order to succeed in the commercial arena, inference engines have to produce systems with usability comparable or superior to that of an equivalent directed dialog for the same task, or provide services (e.g. problem solving applications) that cannot be provided with directed dialog, still with usability as the main goal. VUI completeness is an essential requirement which should be taken into proper consideration for the more sophisticated dialog manager models.

### 8 Current Industrial Trends

Reusable components (Huerta et al., 2005) and prepackaged applications are the main trends of the industry of spoken dialog systems today. Componentization and reuse effectively allow reducing deployment costs and risks and, at the same time, simplifying the design and development of more sophisticated applications. Thus the commercial world is approaching the creation of more complex applications through more and more sophisticated building blocks which allow reuse and interplay.

### 9 Conclusions

The way applications are authored, the capabilities the systems have, and the overall usability that is eventually perceived by users reflect the different goals that research and industry have in the field of spoken dialog systems.
Whereas usability and cost effectiveness are the primary goals of the commercial community, research has traditionally aimed at naturalness of interaction and freedom of expression. However, the latter does not necessarily lead to the former. The actual form assumed by dialog managers in both communities is the consequence of those different goals. In fact, in order to achieve high usability, commercial deployments aim at having completely definable interfaces (control and VUI completeness), using efficient languages and architectures (expressiveness and simple-things-should-be-easy) while keeping the ability to achieve complex levels of interaction (complex-things-should-be-possible). At the same time, the focus of research is towards abstracting, validating, and achieving complex levels of natural interaction. While at first glance both sets of goals might seem in conflict, we believe that an evolution towards more complex levels of interaction, while using an effective development framework and implementing a "controllable" (VUI complete) interface, is possible. We have shown that most commercial dialog management abstractions fall into the functional finite-state controller mechanism, as do some of the dialog managers developed in research. The difference is in the constraints applied to the topology of the controller and in the type of authoring (graphs vs. rules). We have also shown that there is a second category of dialog managers, inference based, which is devoted to handling more complex interactions, such as problem solving applications. VUI-completeness is required for them to become viable and reach the level of usability needed to succeed in the commercial arena. We believe that the authoring of applications should be aligned with the model used at design time, and possibly with the runtime environment.

---

17 A commercial version of this dialog manager was implemented by IBM and used in a financial application (T. Rowe Price).
In this way efficiency can be achieved at all levels: design, development, and deployment. The framework should allow for the encapsulation of dialog mechanisms into templates, components, and subroutines that abstract behaviors. Beyond allowing for a reduction of development costs, this is also the first step towards the implementation of more complex interaction mechanisms. Finally, the framework should have strict “directed” and thus controllable default behavior, but at the same time should allow for more complex interactions to be triggered if and when these dialog mechanisms would benefit the interaction (e.g., power users). We believe that a consolidation of the goal priorities (i.e. usability and naturalness of interaction) between research and the commercial world will foster further maturation of the technology. For this to happen, though, the dialog needs to start. References
Neither the whole or any part of the information contained in, or the product described in, this manual may be reproduced in any material form except with the prior written approval of Acorn Computers Limited (Acorn Computers). The product described in this manual and products for use with it, are subject to continuous developments and improvement. All information of a technical nature and particulars of the product and its use (including the information in this manual) are given by Acorn Computers in good faith. In case of difficulty please contact your supplier. Deficiencies in software and documentation should be notified in writing, using the Acorn Scientific Fault Report Form to the following address: Sales Department Scientific Division Acorn Computers Ltd Fulbourn Road Cherry Hinton Cambridge CB1 4JN All maintenance and service on the product must be carried out by Acorn Computers' authorised agents. Acorn Computers can accept no liability whatsoever for any loss or damage caused by service or maintenance by unauthorised personnel. This manual is intended only to assist the reader in the use of the product, and therefore Acorn Computers shall not be liable for any loss or damage whatsoever arising from the use of any information or particulars in, or any error or omission in, this manual, or any incorrect use of the product. Published by Acorn Computers Limited, Fulbourn Road, Cherry Hinton, Cambridge CB1 4JN. Within this publication the term BBC is used as an abbreviation for the British Broadcasting Corporation. NOTE: A User Registration Card is supplied with the hardware. It is in your interest to complete and return the card. Please notify Acorn Scientific at the above address if this card is missing. 
## Contents <table> <thead> <tr> <th>Section</th> <th>Title</th> <th>Page</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>Introducing the Acorn 32000 assembler</td> <td>1</td> </tr> <tr> <td>1.1</td> <td>Installation</td> <td>2</td> </tr> <tr> <td>1.2</td> <td>Assembler commands</td> <td>2</td> </tr> <tr> <td>1.3</td> <td>Assembler options</td> <td>3</td> </tr> <tr> <td>1.4</td> <td>Assembler listing format</td> <td>4</td> </tr> <tr> <td>2</td> <td>32000 assembler source format</td> <td>7</td> </tr> <tr> <td>2.1</td> <td>Format of source lines</td> <td>7</td> </tr> <tr> <td>2.2</td> <td>Character set</td> <td>8</td> </tr> <tr> <td>2.3</td> <td>Symbols</td> <td>8</td> </tr> <tr> <td>2.4</td> <td>Constants</td> <td>9</td> </tr> <tr> <td>2.5</td> <td>Expressions</td> <td>10</td> </tr> <tr> <td>2.6</td> <td>Mnemonic conventions</td> <td>11</td> </tr> <tr> <td>3</td> <td>Assembler directives</td> <td>13</td> </tr> <tr> <td>3.1</td> <td>Absolute and relocatable modes</td> <td>13</td> </tr> <tr> <td>3.2</td> <td>Standard directives</td> <td>13</td> </tr> <tr> <td>3.2.1</td> <td>GET</td> <td>14</td> </tr> <tr> <td>3.2.2</td> <td>CHAIN</td> <td>14</td> </tr> <tr> <td>3.2.3</td> <td>EQU</td> <td>14</td> </tr> <tr> <td>3.2.4</td> <td>EQUR</td> <td>15</td> </tr> <tr> <td>3.2.5</td> <td>SET</td> <td>15</td> </tr> <tr> <td>3.2.6</td> <td>NLSYM</td> <td>15</td> </tr> <tr> <td>3.2.7</td> <td>IF</td> <td>16</td> </tr> <tr> <td>3.2.8</td> <td>IFDEF and IFNDEF</td> <td>16</td> </tr> <tr> <td>3.2.9</td> <td>MACRO...MEND</td> <td>17</td> </tr> <tr> <td>3.2.10</td> <td>DCB</td> <td>21</td> </tr> <tr> <td>3.2.11</td> <td>DCS</td> <td>21</td> </tr> <tr> <td>3.2.12</td> <td>DCW</td> <td>21</td> </tr> <tr> <td>3.2.13</td> <td>DCD</td> <td>21</td> </tr> <tr> <td>3.2.14</td> <td>DCF</td> <td>22</td> </tr> <tr> <td>3.2.15</td> <td>DCL</td> <td>22</td> </tr> <tr> <td>3.2.16</td> <td>ALLOCB</td> <td>22</td> </tr> <tr> <td>3.2.17</td> <td>ALLOCW</td> <td>22</td> </tr> <tr> <td>3.2.18</td> 
<td>ALLOCD</td> <td>22</td> </tr> <tr> <td>3.2.19</td> <td>ALIGN</td> <td>23</td> </tr> <tr> <td>3.2.20</td> <td>ENTRY</td> <td>23</td> </tr> <tr> <td>3.2.21</td> <td>END</td> <td>23</td> </tr> <tr> <td>3.2.22</td> <td>TITLE</td> <td></td> </tr> <tr> <td>3.2.23</td> <td>OPTIONS</td> <td></td> </tr> <tr> <td>3.3</td> <td>Object module directives</td> <td></td> </tr> <tr> <td>3.3.1</td> <td>MODULE</td> <td></td> </tr> <tr> <td>3.3.2</td> <td>AREADEF</td> <td></td> </tr> <tr> <td>3.3.3</td> <td>AREA</td> <td></td> </tr> <tr> <td>3.3.4</td> <td>AREALEN</td> <td></td> </tr> <tr> <td>3.3.5</td> <td>AREAEND</td> <td></td> </tr> <tr> <td>3.3.6</td> <td>EXPORT and EXPORTC</td> <td></td> </tr> <tr> <td>3.3.7</td> <td>IMPORTC</td> <td></td> </tr> <tr> <td>3.3.8</td> <td>IMPORT</td> <td></td> </tr> <tr> <td>3.3.9</td> <td>HANDLER</td> <td></td> </tr> <tr> <td>3.3.10</td> <td>SPECSB and DEFSB</td> <td></td> </tr> <tr> <td>3.3.11</td> <td>ADDRESS</td> <td></td> </tr> <tr> <td>3.3.12</td> <td>CDESC</td> <td></td> </tr> <tr> <td>3.3.13</td> <td>LINKNO</td> <td></td> </tr> <tr> <td>3.4</td> <td>Treatment of labels</td> <td></td> </tr> </tbody> </table>

## 1 Introducing the Acorn 32000 assembler

This document is a reference guide to the Acorn 32000 macro assembler. It is not a tutorial guide, and the reader is therefore assumed to be familiar with:

- 32000 assembly language. This is described in the *Instruction Set Reference Manual*, which is available from dealers. Although the mnemonics used by the assembler are to the National standard, pseudo-operations (assembler directives) are specific to the Acorn assembler.
- The Panos operating system. The use of the operating system environment under which the assembler runs is described in documents supplied with the system: the *Panos Guide to Operations* and the *Panos Programmer's Reference Manual*. The user is also assumed to know how to use the command to call the assembler.

Acorn Object Format (AOF) is mentioned frequently in this guide; the output produced by the assembler will usually be in this form, although knowledge of the details is unlikely to be required. A full description of AOF is given in the *Panos Technical Reference Manual*. See also the various User Guides for introductory material.

Features of the assembler include:

- complete support of the NS32000 instruction set, including the Memory Management and Floating Point extensions.
- support of all nine categories of the general addressing modes of the NS32000.
- two types of object file:
  1. an image in Acorn Object Format, suitable for linking into a Panos relocatable image using the system linker.
  2.
a simple binary image, suitable for immediate execution from the Pandora * prompt.

- powerful macro defining capability. The user may define macro instructions in the source which may be called to insert common sequences of 32000 mnemonics or assembler directives. Macros may call other macros, and recursion is possible.
- conditional assembly. The ability to assemble parts of the source conditionally is made even more useful by the ability to set 'flag' symbols on the command line, so that different versions may be assembled from the same source file.

### 1.1 Installation

The assembler is supplied on a 5¼ inch floppy disc in Acorn DFS format. This needs to be installed even if it is intended for use in conjunction with the DFS. Refer to the appropriate User Guide supplied with the hardware for details about installing the assembler.

### 1.2 Assembler commands

This section summarises the arguments of the assembler command. See the beginning of chapter 2 for a breakdown of the metasyntax used here.

{-source} filename{-asm}

This names the source file to be assembled. The extension '-asm' will be appended if no other extension is given. Multiple files may be assembled using the CHAIN directive.

-list {name}

An assembly listing may be sent to a file called source-lis, or to another named file or device. The format of the listing is described in section 1.4. See figure 1 for a demonstration of this option.

-error {name}

Assembly errors are reported to the initial error stream by default (i.e. the messages usually go to the screen). The 'error' argument names an alternative destination for errors.

-aof {filename}

The output from the assembler is put into a file source-aof by default. The 'aof' argument allows an alternative file to be named. Note that the file may not in fact be an AOF file if the assembly was carried out in absolute or relative binary mode.

-opt options

Several options are provided to change the behaviour of the assembler.
These are described in section 1.3 below.

-get "mapping {, mapping}*"

This argument is used to specify a mapping between filenames specified in GET and CHAIN directives and the actual filenames to be used. The word 'get' is followed by a string in double quotes which is a comma-separated list of mappings from GET (or CHAIN) names to filenames. For example:

```
-> asm32 -source fred -get "fpStuff=fp-asm,debug=db-asm"
```

With this mapping, a "GET fpStuff" directive would access file fp-asm.

-identify

Specifying this argument causes the assembler to print its version number.

-help

Specifying this argument causes the assembler to produce a summary of the arguments which may appear on the command line.

### 1.3 Assembler options

The -opt argument is followed by a list of letters which are used to flag various options. A flag letter preceded by a `+` enables the option; a `-` sign disables it. The exception is `$`, which is followed by the name of the symbol to be set or reset. Note that if the first option letter is preceded by a `-`, then the whole option string must be enclosed in double quotes, e.g. -opt "-l-m".

c

Usually upper and lower case are treated as distinct characters in identifiers. Quoting opt +c causes cases to be equated upon reading each source line, so that fred and FRED are the same symbol.

l

Usually source files are loaded into store during the first pass (if there is enough space) to minimise disc accesses on subsequent passes. This occasionally causes the assembler to run out of room. Quoting opt -l will disable loading and thus prevent the no room error (unless there genuinely isn't enough memory for the assembly).

m

By default the assembler will try to optimise the size of the output file by taking many passes over the source. Giving the opt -m option causes the assembler to make only enough passes to resolve symbol references, at the expense of producing non-optimally sized output code. This option only applies when absolute binary rather than AOF is generated.
p

Usually the assembler produces 'packed' style AOF files. Quoting opt -p causes general format AOF files to be produced.

$

This option is followed by a name to be set to TRUE. This name may be accessed in a conditional assembly (IF) directive in the source. For example, -opt $debug sets the symbol 'debug' to TRUE (-1). Following the name by a single quote, e.g. -opt $debug', sets the symbol to FALSE (0).

As implied by the descriptions above, the default state of the options is: +LMP-C.

### 1.4 Assembler listing format

An assembly listing is produced if required by giving the -list argument on the command line. It has the following format:

```
llllll b1 b2 b3 b4 b5 b6 nnnn text.....
```

where:

llllll is the value of the location counter at the start of the code for the line, printed as a 6-digit hex number.

nnnn is the source line number.

b1..b6 are the byte values (in hex) of the generated code. b1 is at the lowest address. Spaces are printed if fewer than 6 bytes were generated; extra bytes are displayed on following lines in the form:

```
llllll b7 b8 b9 etc.
```

with at most 6 bytes per line, and llllll being the address of the first byte on each extra line.

Lines which came from a macro expansion in the source are marked with a + character at the start of the line.

At the end of the assembly, the assembler sends the following statistics to the output stream, which is the vdu by default (only if the global string Program$Verbosity is set to greater than 1 - see the Panos Guide to Operations):

- The number of errors detected.
- The total size (in bytes) of the area(s).
- The number of passes required.

Incorrect lines are echoed to both the listing file and the error file. Errors are reported using textual messages printed out before the failing line. Figure 1 gives an example of an error message from an assembly within the Panos editor.
The source can be seen in the background, the assembly command appearing in the top window, with the error message contained in the lower window.

Figure 1 Assembler error message

## 2 32000 assembler source format

The Acorn 32000 assembler accepts standard National Semiconductor instruction mnemonics, and in addition provides a full set of pseudo-mnemonics (assembler directives) and the ability to define macro instructions. A source program is a sequence of lines which may contain 32000 assembly language mnemonics, assembler directives, comments, or nothing at all.

Within this document a meta-syntax is used to describe the syntax of assembler source lines. In this meta-syntax, the characters {, }, |, * and ' have special meanings:

{x} means 0 or 1 occurrences of x
{x}* means 0 or more occurrences of x
{x|y} means 1 occurrence of x or 1 occurrence of y
'c', where c is a single reserved character, means the literal character c, i.e. any special meaning is disabled. If c is not a single character or not a reserved character then ' stands for itself.
name is a syntax class-name (i.e. lower case text; upper case text is used for literal items, e.g. MOVQD, END).

All other symbols stand for themselves.

### 2.1 Format of source lines

The format of a source line is:

{label} {mnemonic {operand {,operand}*}} {;comment}

If a label is present, it must start at the beginning of a source line. Any mnemonic must be preceded by at least one space. A comment may start at any position on the line; it is marked by a semi-colon and continues up to the end of the line. There must be at least one space between a mnemonic and any following operands, but no space need precede a comment.

Operands are separated by commas and may contain spaces. These are ignored, except within string constants. Expressions are therefore allowed to contain blanks, which are ignored. However, spaces are not allowed in tags (see later), numeric constants, and compound symbols such as >=.
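For example, the following hypothetical lines (invented for this guide, not taken from any real program) show the elements of the line format just described:

```
loop    MOVD   count, R0     ; label, mnemonic, two operands and a comment
        ACBD   -1, R0, loop  ; no label; the mnemonic is preceded by spaces
; a comment may also stand alone on a line
```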
A source line may contain up to 255 characters. The assembler will stop with an error if more than this number of characters occur without a line-break, since this would suggest an erroneous source file, e.g. a file which is not a text file.

### 2.2 Character set

The character set consists of letters (upper and lower case), digits, the underscore character (_) and other special characters. Upper and lower case are distinct, except in instruction mnemonics, directives, and macro names. The use of option c will cause the assembler to equate upper and lower case in identifiers.

### 2.3 Symbols

Symbols consist of letters, digits and underscores, starting with a letter or underscore. Symbols are significant to 63 characters. A relocatable symbol is one which is defined as a label in a relocatable area. All other symbols are either absolute or external.

Some symbols are reserved and so cannot be redefined. These are:

R0, R1, .. R7, F0, F1, .. F7
TOS, EXTERNAL, FP, SP, SB, PC
The MMU registers

Mnemonics, e.g. END and MOVQ, are allowed as label names however, so lines such as:

END END

are allowed but not advisable. Note that the letters used in the option field of the string instructions (MOVSi, CMPSi etc.) and the SETCFG instruction are not reserved; they are marked by the fact that they appear in this specific context (inside square brackets).

### 2.4 Constants

Integers may be given as unsigned decimal numbers, or in the forms #Xhhhh, #Bbbbb and #Odddd for hexadecimal, binary and octal representations respectively. Note that the letters A through F used in hex numbers may occur in either upper or lower case. An alternative representation of hexadecimal numbers is the form :hhhh. All integers are interpreted as 32-bit quantities.

Floating point constants are optionally signed and have an optional exponent. Examples are:

1.00   1.1   -.1   -1e4   1.234E-1

Floating point constants are allowed only in the DCF and DCL directives, and as floating point immediate operands.
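As an illustration of these forms, the following lines (invented for this guide) plant the same integer value written in each radix, followed by some floating point constants:

```
        DCD  255, #XFF, #B11111111, #O377, :FF   ; five ways of writing 255
        DCF  1.00, -.1, 1.234E-1                 ; single precision constants
```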
String constants are delimited by single quotes in the simple case, and double quotes to generate a counted-string form, i.e. the bytes in the text preceded by a byte containing the length of the text. The DCB directive (described later) accepts either form, creating the appropriate stream of bytes. Character constants are also valid in integer expressions, where their length is limited to 4 characters in the simple form, and 3 characters in the counted-string form, the length byte forming part of the value in the latter case.

The value of a multi-character constant as an integer is calculated using the same store interpretation adopted by the 32000 architecture, i.e. the least significant byte is the byte at the lowest address, which is the leftmost character of a string, or the length byte in counted strings. Hence:

'A' = #X41, "A" = #X4101, 'AB' = #X4241, "AB" = #X424102 etc.

Within string constants, the asterisk * is used as an escape character. If the character that follows is an N or n, then the actual byte value stored at the current position is determined by the value of the NLSYM option (see directive descriptions later); the default value is 10 (ASCII LF = NL). If the following one or two characters are valid hex digits, then the number they represent is planted as a byte value. This enables the simple insertion of control characters within strings. For example, A*N generates the bytes #X41, #X0A; *03*FEA generates #X03, #XFE, #X41. If neither of these cases holds, the following character is planted without interpretation; hence a single asterisk is represented in a string as **.

Because this mechanism allows the representation of the newline character in strings, it is forbidden for strings to cross line boundaries.

The special symbol '$' is used to stand for the program counter. For example:

```
BR $ ; infinite loop
```

### 2.5 Expressions

All expressions are calculated to 32 bits and overflow is ignored.
Evaluation is ordered according to the priority below, and left-to-right for operators of the same precedence. Bracketed sub-expressions are evaluated first. The arithmetic and comparison operators treat their operands as signed quantities; the 6 operators in the comparison group return TRUE (-1) or FALSE (0). <table> <thead> <tr> <th>Operator Priority</th> <th>Functions</th> </tr> </thead> <tbody> <tr> <td>8</td> <td>Unary minus</td> </tr> <tr> <td>8</td> <td>Bitwise complement (unary)</td> </tr> <tr> <td>7</td> <td>Logical left shift (0s shifted in from right)</td> </tr> <tr> <td>7</td> <td>Logical right shift (0s shifted in from left)</td> </tr> <tr> <td>6</td> <td>Bitwise AND</td> </tr> <tr> <td>6</td> <td>Bitwise OR</td> </tr> <tr> <td>6</td> <td>Bitwise exclusive OR</td> </tr> <tr> <td>5</td> <td>Multiply</td> </tr> <tr> <td>5</td> <td>Divide (as defined by QUOD instruction)</td> </tr> <tr> <td>5</td> <td>Remainder (Modulus - as defined by REMD instruction)</td> </tr> <tr> <td>4</td> <td>Subtract</td> </tr> <tr> <td>4</td> <td>Add</td> </tr> <tr> <td>3</td> <td>Equal-to (=)</td> </tr> <tr> <td>3</td> <td>Not-equal-to (&lt;&gt;)</td> </tr> <tr> <td>3</td> <td>Less-than (&lt;) *</td> </tr> <tr> <td>3</td> <td>Greater-than (&gt;) *</td> </tr> <tr> <td>3</td> <td>Less-than-or-equal-to (&lt;=) *</td> </tr> <tr> <td>3</td> <td>Greater-than-or-equal-to (&gt;=) *</td> </tr> <tr> <td>2</td> <td>Conditional NOT (!)</td> </tr> <tr> <td>1</td> <td>Conditional AND (&amp;&amp;)</td> </tr> <tr> <td>1</td> <td>Conditional OR (|)</td> </tr> </tbody> </table>

The comparison operators marked * perform signed comparison.

Note: the only operators which may have one or both operands relative (or external) are + and - (unary and binary). Relative ± absolute evaluates to relative; relative - relative evaluates to absolute. In object module (AOF) mode, two relative operands must have the same relocation base (i.e. they must be defined as labels in the same area).

### 2.6 Mnemonic conventions

The assembler accepts all standard National Semiconductor instruction mnemonics (as described in the *Cambridge Series Instruction Set Reference Manual*), including floating point unit (FPU) and memory management unit (MMU) instructions.
The normal 32000 operand forms are accepted for all 'general' type operands, with the following conventions:

- An expression on its own is normally treated as a code-area address and is assembled as a PC-relative operand (or (SB) or EXTERNAL when the assembler is in AOF mode). The type of the expression must match the current code-area type, i.e. an absolute expression will be faulted in a relocatable area. This rule also applies to branch-type operands, i.e. of 'disp' class.
- Immediate mode operands are specified by preceding an absolute expression with the equals-sign =. In the case of floating point immediates, only a constant may follow the =, not an expression.
- Absolute operands are specified by prefixing an absolute expression with the at-sign @.
- Operands for a 'quick' type argument must be absolute expressions, optionally prefixed by an equals-sign.

In addition, the following special cases are accepted, as shown by these examples:

```
MOVSW [U,B]
ENTER [R0, R1, R3], 24
RESTORE [R0-R4]    ; all registers between R0 and R4 inclusive
SETCFG [I]
CMPSD []
```

## 3 Assembler directives

This chapter describes the directives acted upon by the assembler. Most of these are general purpose and may occur anywhere in the source. Others are specific to the production of Acorn Object Format files and should only be used after a MODULE directive.

### 3.1 Absolute and relocatable modes

At any time the assembler is in one of three modes - absolute, relocatable or AOF. The default mode is absolute. In any assembly, at most one of the directives ABSORG, RELORG or MODULE may occur, and then only once. If one does occur, it must be before any code or data has been generated, or any label defined; otherwise it is treated as an error. The form of these directives is:

```
ABSORG expression
RELORG expression
MODULE name
```

The value of the expression must be absolute, and defined by the time the directive is first encountered. It may not change between passes.
The effect of the directive is to set the assembler into the specified mode, and the location counter to the value of the expression. The MODULE directive sets AOF mode, and the optional name is planted in the output file as the module name. Once in AOF mode the assembler will allow the special directives described in section 3.3.

### 3.2 Standard directives

The following directives are handled by the assembler, plus the MODULE directives described in section 3.3. As noted in the section on symbols, they may occur in either, or any combination of, upper or lower case, as may the names of user-defined macros and instruction mnemonics. The directives are now described in turn, apart from ABSORG and RELORG, detailed above.

### 3.2.1 GET

**Syntax:** GET filename

On encountering this, the assembler suspends processing of the current file and starts to read input from file 'filename' (or the file to which this name is mapped via the -get command-line option; see section 1.2). The name should be enclosed in single quotes as above. On reaching the end of the file specified, processing resumes at the point in the first file where it had been suspended.

GET may be used in a MACRO definition, but note that the GET operation happens when the macro is expanded, not when the body is read in. A file read in by this means may itself contain a GET directive, but the level to which this process may recurse is dependent upon the state of the Panos I/O environment, the limitations of the filing system involved, and the assembler itself, which has a restriction of five levels.

### 3.2.2 CHAIN

**Syntax:** CHAIN filename

This is similar in effect to GET, except that the current file is closed and processing continues with the named file. This directive will commonly occur at the end of a source file, if the text of the program is too large to fit conveniently in a single file. CHAIN may not occur in a MACRO definition.
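For example, a source file might end as follows (the file names here are purely illustrative):

```
        GET 'macros'     ; read definitions from file macros, then resume here
        CHAIN 'part2'    ; close this file and continue assembly from part2
```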
### 3.2.3 EQU

**Syntax:** ident EQU expression

Defines symbol 'ident' to have the value of the expression, which may be absolute or relocatable, but not external.

### 3.2.4 EQUR

**Syntax:** ident EQUR register_name

Defines symbol 'ident' to be a synonym for the named integer or floating point register (which may itself have been defined by EQUR).

### 3.2.5 SET

**Syntax:** ident SET expression

This directive has the same effect as EQU, except that a symbol defined using SET may be redefined using SET, i.e. the identifier is an assembler 'variable'.

### 3.2.6 NLSYM

**Syntax:** NLSYM expression

This sets the value (in the range 0 to 255) to be planted on encountering the character pair *N in a string constant. The initial value is 10 (ASCII LF = NL).

### 3.2.7 IF

**Syntax:**

```plaintext
IF expression_1
  section_1
ELIF expression_2
  section_2
ELIF expression_3
  section_3
...
ELSE
  section_n
FI
```

The IF..ELIF..ELSE..FI construct is directly analogous to the same construct in high-level programming languages, but here is used to control the conditional assembly of different sections of code according to whether the expressions evaluate to TRUE (not 0) or FALSE (0). Note that these constructs may be nested, with the obvious interpretation, and that any or all of the ELIF and ELSE clauses may be omitted.

The section which is assembled (if any) must be the same on all passes of the assembler. To ensure this, the expressions must be absolute, must not contain any forward references, and must not use any symbols set as labels or derived from labels. Note that if the compact code option has been disabled (using opt -m on the command line), the label restriction does not apply. IF constructions must not be split over source files or macros.

### 3.2.8 IFDEF and IFNDEF

**Syntax:**

```plaintext
IFDEF symbol_name
IFNDEF symbol_name
```

The IFDEF and IFNDEF directives are alternatives to IF in the general conditional assembly constructs.
If the named symbol has been defined on the current assembly pass by the time that an IFDEF directive is encountered, the effect is the same as that with "IF true-expression". IFNDEF provides the converse effect - i.e. for when the symbol has NOT been defined on the current pass.

### 3.2.9 MACRO...MEND

**Syntax:**

```plaintext
MACRO
macro_call_template
macro_body
MEND
```

The directive MACRO introduces the definition of a textual macro. It appears by itself on a (possibly commented) source line. The macro_call_template looks like:

{%label} macro_name {param_def {, param_def}*}

The macro_name may be the same as an existing instruction, directive or macro. To access the old instruction from within the macro definition, it should be preceded by an @. For example, within a macro called MODULE, the MODULE directive must be written @MODULE.

A param_def looks like:

%param_tag {=default_value}

A param_tag is defined as one of the following items:

(a) an ordinary symbol name
(b) a 1- or 2-digit decimal number
(c) a single asterisk *

The label field is optional, and if present must start at the beginning of the line. It has the same syntax as an ordinary symbol, preceded by a percent sign. The macro_name must be supplied - its syntax is that of ordinary symbols, with the exception that it will be recognised in any combination of upper and lower case. The parameter list, if present, follows after at least one space. There may be 0 or more parameter definitions. The parameter tags may be any combination of types (a) and (b) above, or 0 or more of type (a) followed by a single parameter of type (c). A parameter of type (c) is used for passing arbitrary lists of items.

The default value of a parameter, if supplied, is a piece of text which will be treated as having been supplied in the actual call if that parameter was omitted. It has the same syntax as actual parameters (see below). Note that a parameter of type (c) may not take a default value.
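As an illustration, the following definition (a hypothetical macro, not part of the assembler) uses a label parameter, a defaulted type (a) parameter and a type (c) list parameter in its call template:

```
        MACRO
%lab    SaveReg %first=R0, %*     ; label, default value, list parameter
%lab    MOVD %first, TOS          ; the actual label is planted here
        MEND
```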
If, for a given parameter, no value is supplied at the call, and there is no default value, the parameter is treated as a null string "". When a macro using a type (c) parameter is called, it is as if the macro had a parameter list ending with the sequence %1, %2, %3, ... where the number of such parameters in the formal list matches the number of such parameters in the call.

When a macro name is encountered during assembly, its call template is matched against the current line in the source file, and the parameters assigned, with appropriate defaults. The label field is treated specially, in that if there is a label on the line containing the macro call, but there is no label in the macro call template, then the label is treated in the normal way and defined as a symbol whose value is that of the location counter at the start of the line. If however there is a label field in the macro template, then the text of the actual label is assigned to the label parameter and is not entered as a symbol at this point.

Actual parameters are treated as uninterpreted textual information. All spaces in parameters are removed, except for those occurring within quotes. The text of the parameter is preserved in respect of the case of letters and the occurrence of special characters. To get a space into a parameter, the whole parameter must be placed in quotes. Note that quote characters surrounding a parameter are stripped during processing - when the parameter is substituted during expansion the quotes will not appear. To get a quote character into a parameter, the parameter is enclosed in quotes and two quotes are used (as in string constants). Note that the contents of a quoted parameter are NOT treated as if they were a string - e.g. *N is not translated into a newline character.

Assembly then continues with the source text being read out of the body of the macro, rather than from the source file.
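Since the surrounding quotes of a quoted parameter are stripped, doubled quotes are needed if the substituted text is itself to be a string constant. For example, with a hypothetical macro:

```
        MACRO
        Msg %text
        DCB %text, 0              ; plant the text followed by a zero byte
        MEND

        Msg '''Hi there'''        ; %text becomes 'Hi there', quotes included
```

The call expands to DCB 'Hi there', 0; with only a single pair of quotes the macro body would have received the unquoted text Hi there.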
On encountering a '%' character during reading of a macro, the assembler reads the next item, which should be a param_tag as described above. It is an error for the item not to be a tag defined in the call of the current macro, unless the assembler is currently treating its input as a comment, in which case this error is ignored.

In order to increase the usefulness of macros, it is possible to concatenate a parameter with a following piece of text which would otherwise be taken as part of the parameter name. If a parameter is followed by a full-stop (.) character, the assembler will ignore it but terminate its parsing of the parameter name. Hence if %T1='B', %dest='R0', %source='STEP(SB)' then:

```
MOVZ%T1.D %source, %dest
```

would expand to

```
MOVZBD STEP(SB), R0
```

There is a special case of parameter instantiation which is related to type (c) parameters. If the parameter %* occurs within the macro body, it expands to the parameters %1, %2, %3.. separated by commas. In addition, if %* is immediately followed by a 1- or 2-digit number (N say), it expands to parameters %N, %N+1 etc. These forms are only valid in macros which have a %* type parameter in the call template.

The item %#param expands to the number of characters in param as a textual string, which is useful for testing for null string parameters:

```
IF %#string = 0
```

Also %(absexpr) converts absexpr into a decimal string representation of the value of the expression. The expression may not include forward references or macro parameters. However such an expression could be SET to a label before using %(label).

In addition to normal parameters created on macro call, the assembler maintains two pseudo-parameters connected with macros. These are:

%MCOUNT - Macro count. This parameter when substituted returns an integer (as text) which is the number of macro calls made so far on this pass, up to and including the point at which the macro currently being expanded was called.
On each macro call, a global variable is incremented, and assigned to a (local) pseudo-parameter which is entered into the symbol table with the other parameters for this macro. Hence within a given macro expansion, the value of %MCOUNT may be used to create local labels, if some simple naming convention is adopted.

%PCOUNT - Parameter count. This expands to the number of numeric parameters which were created using a type (c) parameter in the current macro. Its main use (as for %*) is in the writing of recursive list-processing macros, which may handle an arbitrary number of parameters. Note that %PCOUNT does NOT include any normal type (a) parameters preceding the %*. It is an error to instantiate %PCOUNT unless a macro with a type (c) parameter is currently being expanded.

Here are a few examples of the use of macros to demonstrate these points:

```
        MACRO
        MOVC %N, %dest                  ; move constant to double-word
        IF ((%N)>=-8) && ((%N)<=7)      ; in range -8..7
        MOVQD %N, %dest
        ELIF ((%N)>=#XC0000000) && ((%N)<=#X3FFFFFFF)   ; OK in up to 30-bit
        ADDR %N, %dest
        ELSE
        MOVD =%N, %dest
        FI
        MEND

        MACRO
        Case_Table %Type, %Base, %*
        IF %PCOUNT = 0                  ; end of table
        ELSE
        DC%Type %1-%Base                ; dump 1 element
        Case_Table %Type, %Base, %*2    ; and do the rest of the list
        FI
        MEND

        MACRO
%Name   ENUM %Base, %*                  ; enumerate list of names as constants
        IF %PCOUNT > 0
%Name._%1 EQU %Base                     ; define this one
%Name   ENUM %Base+1, %*2               ; enumerate rest
        FI
        MEND
```

Example of use:

```
Colour  ENUM 0, black, red, green, blue, white
```

defines Colour_black as 0, Colour_red as 1, etc.

### 3.2.10 DCB

**Syntax:** DCB expression {, expression}*

This directive causes the planting of bytes of data into store. Each expression is either an absolute integer expression, or a string expression. The null string '' is permitted, for which no bytes are planted. It is an error to plant an integer expression not in the range -128 to 255. The counted-string form may also be used by enclosing the characters to be planted in double quotes.
This puts a length byte followed by the actual text.

### 3.2.11 DCS

**Syntax:** DCS expression {, expression}*

This directive is identical in effect to DCB.

### 3.2.12 DCW

**Syntax:** DCW expression {, expression}*

This directive causes the planting of words of data into store. Each expression must be absolute, and evaluate to an integer in the range -32768..65535.

### 3.2.13 DCD

**Syntax:** DCD expression {, expression}*

This directive causes the planting of double-words into store. Each expression must be an absolute integer expression.

### 3.2.14 DCF

**Syntax:** DCF fpconst {, fpconst}*

This causes the planting of a list of single precision (four-byte) floating point constants in memory.

### 3.2.15 DCL

**Syntax:** DCL fpconst {, fpconst}*

This acts as DCF but plants double precision (eight-byte) constants.

### 3.2.16 ALLOCB

**Syntax:** ALLOCB expression {, expression}

This directive reserves a number of bytes of store as determined by the value of the first expression (which must be ≥ 0). The essential effect is to add the value of the expression (which must be absolute) to the location counter. The optional second parameter is the value which will be deposited in each byte of the reserved area. If it is omitted, the value of the allocated bytes will be undefined.

### 3.2.17 ALLOCW

**Syntax:** ALLOCW expression {, expression}

This is as for ALLOCB, but reserves store in units of words, i.e. it adds twice the value of the expression to the location counter. The optional second parameter is the value which will be deposited in each word of the reserved area.

### 3.2.18 ALLOCD

**Syntax:** ALLOCD expression {, expression}

This is as for ALLOCB, but reserves store in units of double-words, i.e. it adds four times the value of the expression to the location counter.
The optional second parameter is the value which will be deposited in each double-word of the reserved area.

### 3.2.19 ALIGN

**Syntax:** ALIGN expression {, expression}

This directive is used to set the location counter on an address boundary. The first expression given must evaluate to an absolute, positive quantity, N, which is a power of 2 (i.e. 2, 4, 8, 16 etc). The effect of the directive is to ensure that the location counter is positioned at the next address which is 0 mod N. This is achieved by planting between 0 and N-1 bytes as padding. In place of the first expression, the words BYTE, WORD, DOUBLE, or QUAD may be used. These stand for 1, 2, 4, and 8 bytes respectively.

The default value used for padding is 0, but if the second expression is supplied, then its value will be used - it must evaluate to an absolute integer in the range -128 to 255. For example, in a code area the value #XA2 might be used - this is the one-byte machine instruction NOP.

### 3.2.20 ENTRY

**Syntax:** ENTRY

In AOF mode, this defines the entry point (at the current location) which is looked for by the linker to determine the 'root' module of an image. In absolute or relocatable mode, this sets the current location as the execution address of the binary output file produced.

### 3.2.21 END

**Syntax:** END {expression}

This directive serves two purposes:

- When it is encountered during the processing of an included file (through the use of GET), the assembler closes that file and resumes processing the one it was reading from when the GET occurred. No expression may be present in this case.
- If no GET was in progress then it causes the current pass of the assembler to complete. If the expression is present then it defines the entry point. This is an alternative to the use of ENTRY; only one of these mechanisms may be used in an assembly. It is a fatal error in the latter case for END to occur as a directive within the body of a macro.
It is also incorrect for a final END to occur while there are open conditional assembly blocks (this implies that a FI has been missed out).

3.2.22 TITLE
Syntax: TITLE text
This directive is followed by a string which is subsequently printed at the top of each page of the assembly listing. In addition the directive causes the listing to move to the top of the next page (a form-feed is sent to the listing file). An example is:

TITLE Low-level graphics support routines.

3.2.23 OPTIONS
Syntax: {label} OPTIONS {={+|-}value}*
where label is an optional label, and value is one of:

IFS   Conditional assembly directives
LIST  Global listing (outside of MACRO and IF)
MDEF  Macro definitions
MEXP  Macro expansion
SKIP  Code skipped by IF

If the value is preceded by +, the class of item controlled by that word is enabled in the listing. If it is preceded by -, that part of the listing is disabled. If = (or nothing) precedes the value, the listing will contain only items of that class. Examples are:

OPTIONS LIST   ; Only the global listing
OPTIONS -MDEF  ; Turn off macro definition listings
OPTIONS +SKIP  ; List code skipped in IFs

If the label is present, it is assigned (as with SET) the previous value of the OPTIONS, for use in a later directive, e.g.:

```
oldopt OPTIONS -LIST ; force listing off
....
....
       OPTIONS oldopt ; restore previous state
```

If OPTIONS -LIST is used at the first line of a GET macro library file, and OPTIONS +LIST used on the last line, then no part of the library file will appear in the listing.

3.3 Object module directives

As mentioned at the start of this chapter, the MODULE directive is used to put the assembler in AOF mode. This section describes the directives which are used in this mode.

3.3.1 MODULE
Syntax:
```
MODULE {'name' | name}
```
This directive defines the external name to be given to the module. It must occur at most once in the assembly, obeying the same positioning rules as ABSORG and RELORG.
It overrides the -m option, since compact code is always produced. The name is optional, and only has to be enclosed in quotes if it contains a semi-colon, or multiple or leading spaces.

3.3.2 AREADEF
Syntax:
```
AREADEF namedef (attribute {, attribute}*), alignment
```
This allows the user to create and specify the attributes of an area of store into which code and/or data may be planted using the assembler's normal mechanisms.

The namedef parameter has the syntax:

name {='external name'}

where the second form is used to permit an arbitrary, externally visible name, but is restricted to use with areas marked as COMMON or COMDEF (see below).

The alignment parameter is either an absolute expression which must be a power of 2, or one of the keywords BYTE, WORD, DOUBLE, or QUAD (standing for 1, 2, 4, and 8 respectively). These keywords are recognised only in this context and in the ALIGN directive. Note that in AOF mode, the ALIGN directive will only accept alignment values smaller than or equal to that specified in the AREADEF directive for the area in which it is used.

The attribute keywords which may be present are defined below. They fall into five groups; at most one keyword from each group may occur in the list (in any order). If no keyword from a given group occurs, the first keyword in that group is the default and will be assumed present. Keywords are recognised in any combination of upper and lower case letters.

DATA / CODE
Defines the use to which information in this area will be put. At most one area may be defined as being a code area.

WRITE / READ
READ specifies that the area should be protected against write access (if possible - this depends on the presence of the MMU). WRITE indicates that the area must be made writeable.

NOPIC / PIC
PIC stands for Position Independent Code.
When applied to a code area it indicates that this area contains such code (i.e. re-entrant, pure, and containing no relocation). When applied to a data area it causes the assembler to fault any attempt to generate a relocated object in the area.

PRIVATE / SHARED
Specifying SHARED allows the linker to arrange run-time sharing of the area across different processes using this module. Otherwise the area will only be accessible within a single process space.

CONTIG / COMMON / COMDEF
Defines whether this area will be contiguous with other areas of the same type (CONTIG), or overlap them (COMMON and COMDEF). COMDEF indicates that this module defines the common area named, rather than simply referencing it. For COMMON and COMDEF, the area name may have a different external name, as mentioned above.

Use of this directive also defines a relocatable symbol of the same name as the area at the start of the area (offset 0).

Associated with each area is a location pointer. The only way in which this is changed is by dumping items (including instructions) or allocating space (using ALLOCi, ALIGN etc.) while the area is currently selected.

There is one predefined area. The attributes of this area are as if it had been declared by:

```
AREADEF (code), byte ; null name - special to this area
```

This area is special in that although it is marked as the code area, this may be overridden by the user declaring another area to be the code area. This area is the one which is selected at the start of each assembly pass.

3.3.3 AREA
Syntax: AREA {name}
This directive selects the area into which items will be dumped in the normal way (i.e. by creating instructions or data). The parameter, if present, must be the name of an already declared area. The effect of the command is to set the location counter to the last-reached point within the named area (which is 0 if the area has not been previously selected). If no name is given then the default area is selected.
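As a hypothetical sketch (the area and label names are invented), a module might declare a writeable data area, plant a word in it, and return to the default code area like this:

```asm
        AREADEF data1 (data, write), word  ; word-aligned, writeable data area
        AREA    data1                      ; select it; location counter starts at 0
count   DCW     0                          ; a word at offset 0 of data1
        AREA                               ; re-select the default (code) area
```

Selecting data1 again later would resume dumping at the last-reached point within it, as described under AREA above.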
3.3.4 AREALEN
Syntax: label AREALEN name [, offset]
This assigns the length of the named area to the label. If the offset parameter is present, this is added to the length before the label is assigned. The label may be used only in a general operand or ADDRESS directive, as EXTERNAL mode is used to refer to it.

3.3.5 AREAEND
Syntax: label AREAEND name [, offset]
This acts as AREALEN except that the end (plus optional offset) address of the area is assigned to the label.

3.3.6 EXPORT and EXPORTC
Syntax:
EXPORT namedef [, namedef]*
EXPORTC namedef [, namedef]*
These two directives are used to make symbols external. The syntax of namedef is that given in the description of AREADEF. EXPORT defines a symbol as being data or absolute, according to whether it was defined by the use of EQU (absolute) or as a label (data). EXPORTC defines each given parameter as an external code item - it must have been defined as a label in the code area. At most one EXPORTC or EXPORT command may be applied to any particular name. The namedef syntax allows external names to be completely general. These directives must not be labelled.

3.3.7 IMPORTC
Syntax:
IMPORTC namedef [, namedef]*
IMPORTC defines a symbol as being an external code item descriptor, i.e. one which may be used as the operand in a CXP instruction. It takes a list of namedef parameters, i.e. identifiers with an optional equivalence string. Each name given is defined here, and must not be defined in any other way. A name so defined will normally be used as a CXP operand, but may be used in the context of a general operand, in which case external addressing mode will be generated (this would generally be meaningful only as the first operand in an ADDR instruction). There must be no label on a line containing this directive.
An extended form of namedef is permitted in this context:

int_name={'mod'}'ext_name'

where int_name is the internal name of the imported item, i.e. the one to be used in this assembly, mod is the (optional) name of the module from which the symbol is to be imported, and ext_name is the external name of the item to be imported.

### 3.3.8 IMPORT

**Syntax:**

```
IMPORT namedef [, namedef]*
```

This directive is similar to IMPORTC, but each external symbol so referenced will be set up in the link table by address (or by value, if the symbol is defined as a constant) rather than as a code descriptor. Names so defined may be used only as general operands, and will generate external addressing mode. (Note that it is possible to specify an offset from such a symbol, as part of this mode). The assembler does not create a link entry for these symbols immediately, but on the first time a symbol is used as an operand; hence external symbols which are not used, or are used only in an ADDRESS directive (or in CDESC after an IMPORTC) will not needlessly take up a link table entry. This directive must not have a label. The extended form of namedef is also allowed here (see IMPORTC).

### 3.3.9 HANDLER

**Syntax:**

```
HANDLER
```

This directive must be unlabelled and marks the entry point of the Panos condition handler code for the module being assembled at the current location. It must be in the code area. See the *Panos Programmer's Reference Manual* for further details on condition handlers.

### 3.3.10 SPECSB and DEFSB

**Syntax:**

```
DEFSB position
SPECSB position
```

One of these directives is used to define the location of the static base for this module. The parameter may be one of three types:

1. an absolute expression
2. an expression evaluating to an address in some area
3. the name of an IMPORTed symbol (+ offset)

Case 1 is used if the static base (SB) should point at some absolute store address.
Case 2 is used to set the SB within any declared area. Case 3 is used to set the SB to be relative to the global symbol named. `DEFSB` may occur at most once in an assembly. If SB-relative addressing is used at all, the SB will normally be defined in this way.

If case 2 is used, the assembler will optimise code references to labels in the area in which the SB is defined, so that they use SB-relative addressing, rather than external mode. If the statement:

```
MOVD 6(ABC),R0
```

is encountered, and if ABC is in the SB area, the assembler will generate `X(Y(SB))` addressing. This optimisation is disabled, if necessary, by the use of `SPECSB` instead of `DEFSB`. These directives may not be labelled.

### 3.3.11 ADDRESS

**Syntax:**

```
ADDRESS expression {, expression}*
```

This directive is similar in format to `DCD`, but causes the assembler to generate (link-time) relocatable objects, rather than constant ones. An expression is one of:

- A relocatable address
- A symbol defined using `IMPORT`, i.e. an external name (+ offset)
- An absolute expression

The assembler generates commands in the output file which instruct the linker to relocate each doubleword when the address of the item is known. The last type of item needs no relocation, but may occur here for convenience - the effect is as for `DCD`.

### 3.3.12 CDESC

**Syntax:**

```
CDESC expression {, expression}*
```

`CDESC` creates a code descriptor for a local or external code item. If the parameter is a label in the code area, then a local code descriptor will be created; otherwise it must be the name of a symbol appearing in a preceding `IMPORTC` directive. In either case, the assembler generates commands in the output file to relocate the item at link time.

### 3.3.13 LINKNO

**Syntax:**

```
label LINKNO symbol
```

This assigns the link table number allocated to symbol to label. symbol must be an external, and must have been defined in an IMPORT or IMPORTC directive prior to the use of LINKNO.
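A hypothetical fragment (all names invented) may help show how the import and relocation directives fit together; the equivalence-string form of namedef is used for the code import:

```asm
        IMPORTC prt='PrintLine'  ; external code item, usable as a CXP operand
        IMPORT  errno            ; external data symbol, addressed externally
        CXP     prt              ; call through the imported code descriptor
ptr     ADDRESS errno            ; doubleword relocated by the linker
desc    CDESC   prt              ; code descriptor entry for prt
```

Here `ptr` and `desc` are planted in the current area and patched at link time, as described under ADDRESS and CDESC above.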
3.4 Treatment of labels The following points should be borne in mind regarding the assembler's treatment of labels when producing AOF output: 1. The occurrence of a label reference to an area other than the code area generates external-mode addressing, unless the area contains the static base, when SB-relative addressing is generated. This of course applies only in the context of general operands within the code area - it is illegal to reference such labels as PC-relative operands, e.g. BSR or BR targets. 2. CXP may take three types of operand: - The name of a symbol defined using IMPORTC. This generates a standard external entry. - A label in the code area. This causes the assembler to set up a local code descriptor entry in the link table for the label concerned. - EXTERNAL (absexpr). A reference to the external object identified by the given link table number. This should have been determined by the LINKNO directive.
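The three CXP operand forms listed above might, using the same invented names, look as follows (the EXTERNAL form here is a sketch based on the syntax given in point 2):

```asm
        CXP     prt              ; 1: symbol defined using IMPORTC
        CXP     locproc          ; 2: label in the code area (local descriptor)
lno     LINKNO  prt              ; link table number of prt
        CXP     EXTERNAL(lno)    ; 3: reference by link table number
```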
Scripting Languages and Frameworks
Fritz Henglein, Ranjit Jhala, Shriram Krishnamurthi, and Peter Thiemann
DOI: 10.4230/DagRep.4.6.84
Publication date: 2014
Document version: publisher's PDF (version of record)

Abstract
This report documents the program and the outcomes of Dagstuhl Seminar 14271 “Scripting Languages and Frameworks: Analysis and Verification”. The seminar brought together a broad spectrum of researchers working on the semantics, analysis and verification of scripting languages. In addition to talks describing the latest problems and research on the key issues, split roughly into the overarching themes of semantics, types, analysis, contracts, languages, and security, the seminar had breakout sessions devoted to crosscutting topics that were of broad interest across the community, including how to create shared analysis infrastructure, how to think about the semantics of contracts and blame, and the role of soundness in analyzing real-world languages, as well as several “tutorial” sessions explaining various new tools and techniques.

Seminar July 1–4, 2014 – http://www.dagstuhl.de/14271
1998 ACM Subject Classification D.3.3 Programming Languages, F.3.1 Logics and Meanings of Programs
Keywords and phrases Scripting Languages, Frameworks, Contracts, Types, Analysis, Semantics

1 Executive Summary
Fritz Henglein, Ranjit Jhala, Shriram Krishnamurthi, and Peter Thiemann
License © Creative Commons BY 3.0 Unported license
© Fritz Henglein, Ranjit Jhala, Shriram Krishnamurthi, and Peter Thiemann

In the past decade scripting languages have become more mature: the wild experimentation and almost wilful embrace of obfuscation by Perl has been replaced by the level-headed simplicity of Python and the embrace of programming language research roots by Ruby. As a result, these languages have moved into the mainstream: every Web user relies on JavaScript.
The Challenges of Scripting Languages

Though scripting languages have become more mature, from the perspective of building robust, reliable software they still suffer from several distinct problems, each of which creates new challenges for the research community.

- While these languages have textual definitions, they lack more formal descriptions, and in practice the textual “definitions” are themselves often in conflict with the normative nature of the implementations. This is in contrast to languages like Standard ML, where the formal definition comes first. How far can we go in creating formal semantics from a combination of implementations and textual documents?
- Tests – more than either implementations, textual definitions, or formal semantics – are becoming the norm for specification. For instance, the latest JavaScript standard explicitly embraces testing by publishing and regularly updating a conformance suite. Similarly, a team trying to create an alternate implementation of one of these languages may read the definition, but what they really aspire to match is the test suite behavior. How can we support test suites as a new avenue of programming language specification?
- One of the reasons programmers find these languages enjoyable (initially) is that they offer a variety of “convenient” features, such as overloading. As programs grow, however, understanding the full – and unintended! – behaviors of programs becomes a non-trivial effort. How can we design semantics and static and dynamic tools that can cope with the heavily understated and overloaded behaviors that make scripting languages attractive?
- Programmers increasingly do not program in languages but in high-level frameworks built atop them. For instance, though “Ruby” is popular for Web programming, programmers rarely write Web applications directly in Ruby, but rather atop the higher-level Ruby on Rails platform.
The result of imposing significantly higher-level interfaces is that they necessitate new reasoning modes. For instance, while the jQuery library is a pure JavaScript program, type-checking jQuery as if it were “merely” JavaScript would produce types that are both unreadably complex and relatively useless. Can we build custom reasoning at the level of the frameworks, so that we can provide views of these frameworks that are consistent with the level at which developers think of them, and can we check that the implementations adhere to these interfaces?
- These languages and frameworks are themselves not enough. They all reside in an eco-system of a family of other languages and frameworks whose interdependencies are necessary for proper understanding of program execution. For instance, in the client-side Web, JavaScript – which has gotten significant attention from the research community – only runs in response to stimuli, which are obtained from the DOM. In turn, the DOM and JavaScript both depend on the style-sheets written in CSS. But in fact all three of these components – the JavaScript code, the CSS styling, and the DOM events – depend on one another, because almost any one can trigger or modify the other. Can we construct suitable abstractions such that each language can meaningfully talk about the others without importing an overwhelming amount of detail?

This seminar brought together a wide variety of researchers working on the above questions. The seminar was organized into a series of short and long talks on topics related to the above overarching questions, and four breakout sessions focussing on broader questions and challenges. Next, we briefly summarize the talks and sessions. The contributed talks focussed on the following overarching themes: semantics, type systems, program analysis, contracts, languages, and security.
Overview of Talks: Semantics
- Python, the Full Monty (Joe Gibbs Politz)
- An Executable Formal Semantics of PHP (Daniele Filaretti)
- JSCert, a two-pronged approach to JavaScript formalization (Alan Schmitt)

Overview of Talks: Type Systems
- Progressive Types (Joe Gibbs Politz)
- Safe TypeScript (Panagiotis Vekris)
- Confined Gradual Typing (Éric Tanter)
- Typing Scheme to Typing Racket (Sam Tobin-Hochstadt)
- Type Systems for JavaScript: Variations on a Theme (Benjamin Lerner)
- Flow Typing (Arjun Guha)
- Types for Ruby (Jeffrey Foster)
- Refinement Types for an Imperative Scripting Language (Panagiotis Vekris)
- Late Typing for Loosely Coupled Recursion (Ravi Chugh)

Overview of Talks: Program Analysis
- Abstract Domains for Analyzing Hash Tables (Matthew Might)
- Static Analysis for Open Objects (Arlen Cox)
- Soft Contract Verification (David van Horn)
- Type Refinement for Static Analysis of JavaScript (Ben Weidermann)
- Dynamic Determinacy Analysis (Manu Sridharan)
- Performance Analysis of JavaScript (Manu Sridharan)
- Checking Correctness of TypeScript Interfaces for JavaScript Libraries (Anders Møller)
- Analyzing JavaScript Web Applications in the Wild (Mostly) Statically (Sukyoung Ryu)

Overview of Talks: Contracts
- Membranes as Ownership Boundaries (Tom Van Cutsem)
- TreatJS: Higher-Order Contracts for JavaScript (Matthias Keil)
- Contracts for Domain-Specific Languages in Ruby (Jeffrey Foster)

Overview of Talks: Languages
- HOP: A Multi-tier Language For Web Applications (Tamara Rezk)
- Perl: The Ugly Parts (Matthew Might)
- So, What About Lua? (Roberto Ierusalimschy)
- Regular Expression Parsing (Bjorn Bugge Grathwohl)
- HTML5 Parser Specification and Automated Test Generation (Yasuhiko Minamide)
- AmbientTalk: a scripting language for mobile phones (Tom Van Cutsem)
- Glue Languages (Arjun Guha)

Overview of Talks: Security
- Information Flow Control in WebKit’s JavaScript Bytecode (Christian Hammer)
- Hybrid Information Flow monitoring against Web tracking (Thomas Jensen)
- Intrusion Detection by Control Flow Analysis (Arjun Guha)
- Multiple Facets for Dynamic Information Flow (Cormac Flanagan)
- Shill: shell scripting with least authority (Christos Dimoulas)
- Hybrid Information Flow Analysis for JavaScript (Tamara Rezk)
- A Collection of Real World (JavaScript) Security Problems (Achim D. Brucker)

Lightning Talks
- Reasoning about membranes using separation logic (Gareth Smith)
- Complexity Analysis of Regular Expression Matching Based on Backtracking (Yasuhiko Minamide)
- PHPEnkoder: a Wordpress Plugin (Michael Greenberg)
- SAST for JavaScript: A Brief Overview of Commercial Tools (Achim D. Brucker)

Breakout Sessions
- Contracts and Blame (Cormac Flanagan)
- On the Role of Soundness (Matthew Might, Jeffrey Foster)
- Metrics for Programming Tools (Shriram Krishnamurthi, Joe Gibbs Politz)
- JavaScript Analysis and Intermediate Representation (Thomas Jensen)

Participants

3 Overview of Talks: Semantics

3.1 Python, the Full Monty
Joe Gibbs Politz (Brown University – US)

We present a small-step operational semantics for the Python programming language. We present both a core language for Python, suitable for tools and proofs, and a translation process for converting Python source to this core.
We have tested the composition of translation and evaluation of the core for conformance with the primary Python implementation, thereby giving confidence in the fidelity of the semantics. We briefly report on the engineering of these components. Finally, we examine subtle aspects of the language, identifying scope as a pervasive concern that even impacts features that might be considered orthogonal.

3.2 An Executable Formal Semantics of PHP
Daniele Filaretti (Imperial College London, GB)

We describe the first executable formal semantics of a substantial core of PHP – validated by testing against the Zend Test suite.

3.3 JSCert, a two-pronged approach to JavaScript formalization
Alan Schmitt (INRIA Bretagne Atlantique – Rennes, FR)

JSCert is a formalization of JavaScript that aims at being as close as possible to the specification while having an executable component to run against test suites.

4 Overview of Talks: Type Systems

4.1 Progressive Types
Joe Gibbs Politz (Brown University – US)
License © Creative Commons BY 3.0 Unported license © Joe Gibbs Politz

As modern type systems grow ever richer, it can become increasingly onerous for programmers to satisfy them. However, some programs may not require the full power of the type system, while others may wish to obtain these rich guarantees incrementally. In particular, programmers may be willing to exploit the safety checks of the underlying runtime system as a substitute for some static guarantees. Progressive types give programmers this freedom, thus creating a gentler and more flexible environment for using powerful type checkers. In this paper we discuss the idea, motivate it with concrete, real-world scenarios, then show the development of a simple progressive type system and present its (progressive) soundness theorem.
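The trade-off of leaning on runtime checks instead of static guarantees can be illustrated with a small, hypothetical JavaScript fragment (the `apply` function is invented for illustration; it is not from the talk). The engine's built-in safety check stands in for a static function-type guarantee:

```javascript
// Without a static guarantee that `f` is a function, the runtime's own
// safety check still prevents undefined behavior: calling a non-callable
// value raises a TypeError rather than corrupting anything.
function apply(f, x) {
  return f(x);
}

let caught = null;
try {
  apply(42, 1); // 42 is not callable
} catch (e) {
  caught = e;
}
// caught holds a TypeError: a dynamic check substituted for a static one
```

A progressive type system, as described in the abstract, lets the programmer decide per program point whether such a dynamic check is acceptable or a static guarantee is required.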
4.2 Safe TypeScript
Panagiotis Vekris (University of California – San Diego, US)
License © Creative Commons BY 3.0 Unported license © Panagiotis Vekris

Safe TypeScript is a gradual type system built on top of the TypeScript compiler framework that achieves type soundness by means of stricter static typing rules and a runtime mechanism for checks lying on the boundary between static and dynamic types. Safe TypeScript is geared towards efficiency: it uses differential subtyping, whereby only a minimal amount of runtime annotations is applied; and provides an erasure modality, which enables selective deletion of type annotations for type constructs that are meant to be dealt with entirely statically. The implemented Safe TypeScript compiler has been successfully used on hundreds of lines of existing TypeScript code, incurring a modest overhead on sufficiently annotated input code.

4.3 Confined Gradual Typing
Éric Tanter (University of Chile, CL)
License © Creative Commons BY 3.0 Unported license © Éric Tanter

Gradual typing combines static and dynamic typing flexibly and safely in a single programming language. To do so, gradually typed languages implicitly insert casts where needed, to ensure at runtime that typing assumptions are not violated by untyped code. However, the implicit nature of cast insertion, especially on higher-order values, can jeopardize reliability and efficiency: higher-order casts can fail at any time, and are costly to execute. We propose Confined Gradual Typing, which extends gradual typing with two new type qualifiers that let programmers control the flow of values between the typed and the untyped worlds, and thereby trade some flexibility for more reliability and performance. We formally develop two variants of Confined Gradual Typing that capture different flexibility/guarantee tradeoffs.
We report on the implementation of Confined Gradual Typing in Gradualtalk, a gradually-typed Smalltalk, which confirms the performance advantage of avoiding unwanted higher-order casts and the low overhead of the approach.

4.4 Typing Scheme to Typing Racket
Sam Tobin-Hochstadt (Indiana University – Bloomington, US)

We have extended Typed Racket extensively to include support for features that go beyond traditional Scheme, including first-class classes, delimited continuations, mixins, etc.

4.5 Type Systems for JavaScript: Variations on a Theme
Benjamin Lerner (Brown University, US)

When JavaScript programmers write code, they often target not just the base language but also libraries and API frameworks that drastically change the style of their programs, to the point where they might well be considered as written in domain-specific languages rather than merely JS. Accordingly, the characteristic bugs for such applications vary by domain, and so any tools designed to help developers catch these bugs ought to be tailored to the domain. Yet these tools likely share a common core, since the underlying language is still JS. We present TeJaS, a framework for designing type systems for JavaScript that can be customized to analyze the idiomatic errors of various domains, and we illustrate its utility by describing systems for analyzing DOM-access errors in jQuery programs, and privacy violations in Firefox browser extensions running in private-browsing mode.

4.6 Flow Typing
Arjun Guha (University of Massachusetts – Amherst, US)

Programs written in scripting languages employ idioms that confound conventional type systems. In this paper, we highlight one important set of related idioms: the use of local control and state to reason informally about types. To address these idioms, we formalize run-time tags and their relationship to types, and use these to present a novel strategy to integrate typing with flow analysis in a modular way.
We demonstrate that in our separation of typing and flow analysis, each component remains conventional, their composition is simple, but the result can handle these idioms better than either one alone.

4.7 Types for Ruby
Jeffrey Foster (University of Maryland, US)

This talk summarizes several years of work on ways to bring some of the benefits of static typing to Ruby. We discuss Diamondback Ruby, a pure static type inference system for Ruby; an extension that does profiling to account for highly dynamic language features; the Mix system, which combines type checking and symbolic execution; and, briefly, RubyDust and rtc, which use the ideas of Mix to provide type inference and checking, respectively, at run time for Ruby.

4.8 Refinement Types for an Imperative Scripting Language
Panagiotis Vekris (University of California – San Diego, US)

We present a refinement type checker for a scripting language employing various idioms of the JavaScript/TypeScript language family. Our type system consists of a base type system that includes, among others, object types, unions, intersections and higher-order functions. On top of this base system lies our refinement type system, whose language spans linear arithmetic and uninterpreted predicates. Subtyping on the base system is coercive, and the casts added during base typechecking are expressed in the form of refinement type constraints alongside value-related constraints. These constraints are formulated into logical implications and are discharged by means of Liquid Types inference/checking. Examples outlined in this presentation include safe downcasts based on reflection and in-bounds array accesses.

4.9 Late Typing for Loosely Coupled Recursion
Ravi Chugh (University of California – San Diego, US)

Flexible patterns of mutual recursion can be encoded in scripting languages by defining component functions independently and then “tying the knot” either by mutation through the heap or explicitly passing around receiver objects.
We present a mechanism called late typing to reason about such idioms. The key idea is, first, to augment function types with constraints that may not be satisfied when the functions are defined and, second, to check that these constraints are satisfied by the time the functions are called.

5 Overview of Talks: Program Analysis

5.1 Abstract Domains for Analyzing Hash Tables
Matthew Might (University of Utah, US)

Hash-table-like abstractions pervade scripting languages as fundamental data structures. (Consider objects in JavaScript, dictionaries in Python and hashes in Ruby.) Attempts to model these abstractions with the same abstract domains used to model abstractions of objects in languages like Java (in which fields and methods are fixed upon allocation) break these domains so as to cause catastrophic loss in precision or unsoundness. This talk looks at what is required to retain soundness while more precisely modeling the flexible nature of these structures.

5.2 Static Analysis for Open Objects
Arlen Cox (Colorado University – Boulder, US)

In dynamic languages, objects are open – they support iteration over and dynamic addition/deletion of their attributes. Open objects, because they have an unbounded number of attributes, are difficult to abstract without a priori knowledge of all or nearly all of the attributes and thus pose a significant challenge for precise static analysis. To address this challenge, this talk presents the HOO (Heap with Open Objects) abstraction that can precisely represent and infer properties about open-object-manipulating programs without any knowledge of specific attributes. It achieves this by building upon a relational abstract domain for sets that is used to reason about partitions of object attributes. An implementation of the resulting static analysis is used to verify specifications for dynamic language framework code that makes extensive use of open objects, thus demonstrating the effectiveness of this approach.
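The open-object behavior that makes these structures hard to abstract can be seen in a few lines of JavaScript (a sketch of the language idioms only, not of the HOO analysis itself):

```javascript
// Objects behave as open hash tables: attributes can be added, deleted,
// and iterated over at run time, so the attribute set is unbounded and
// not fixed at allocation (unlike fields in a Java-like language).
const obj = {};
obj["x"] = 1;                   // dynamic addition under a computed key
obj.y = 2;                      // addition under a literal key
delete obj.x;                   // dynamic deletion
const attrs = Object.keys(obj); // iterate over whatever attributes remain
// attrs is ["y"]: the object's shape changed twice after allocation
```

An abstract domain that assumes a fixed field set must either track every possible key (impossible in general) or collapse all keys together, which is the precision loss both abstracts describe.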
5.3 Soft Contract Verification
David van Horn (University of Maryland – College Park, US)

Behavioral software contracts are a widely used mechanism for governing the flow of values between components. However, run-time monitoring and enforcement of contracts imposes significant overhead and delays discovery of faulty components to run-time. To overcome these issues, we present soft contract verification, which aims to statically prove either complete or partial contract correctness of components, written in an untyped, higher-order language with first-class contracts. Our approach uses higher-order symbolic execution, leveraging contracts as a source of symbolic values including unknown behavioral values, and employs an updatable heap of contract invariants to reason about flow-sensitive facts. We prove that the symbolic execution soundly approximates the dynamic semantics and that verified programs can’t be blamed. The approach is able to analyze first-class contracts, recursive data structures, unknown functions, and control-flow-sensitive refinements of values, which are all idiomatic in dynamic languages. It makes effective use of an off-the-shelf solver to decide problems without heavy encodings. The approach is competitive with a wide range of existing tools—including type systems, flow analyzers, and model checkers—on their own benchmarks.

5.4 Type Refinement for Static Analysis of JavaScript
Ben Weidermann (Harvey Mudd College, US)

Static analysis of JavaScript has proven useful for a variety of purposes, including optimization, error checking, security auditing, program refactoring, and more. We present a technique called type refinement that can improve the precision of such static analyses for JavaScript without any discernible performance impact. Refinement is a known technique that uses the conditions in branch guards to refine the analysis information propagated along each branch path.
The key insight of this paper is to recognize that JavaScript semantics include many implicit conditional checks on types, and that performing type refinement on these implicit checks provides significant benefit for analysis precision.

5.5 Dynamic Determinacy Analysis
Manu Sridharan (Samsung Research, US)
License © Creative Commons BY 3.0 Unported license © Manu Sridharan
Joint work of Schäfer, Max; Sridharan, Manu; Dolby, Julian; Tip, Frank
URL http://dx.doi.org/10.1145/2499370.2462168

Programs commonly perform computations that refer only to memory locations that must contain the same value in any program execution. Such memory locations are determinate because the value they contain is derived solely from constants. We present a dynamic program analysis that computes a safe approximation of the determinacy of the memory locations referenced at each program point. We implemented this determinacy analysis for JavaScript on top of the node.js environment. In two case studies, we demonstrate how the results of determinacy analysis can be used for improving the accuracy of a standard static pointer analysis, and for identifying calls to eval that can be eliminated.

5.6 Performance Analysis of JavaScript
Manu Sridharan (Samsung Research, US)
License © Creative Commons BY 3.0 Unported license © Manu Sridharan

Performance analysis for JavaScript is increasingly important, but difficult due to fragile interactions with JIT compilers and complex native APIs like the DOM. We propose an approach to profiling memory behavior of JavaScript code via heavyweight, platform-independent dynamic tracing and offline analysis, and we outline open challenges with this approach.
5.7 Checking Correctness of TypeScript Interfaces for JavaScript Libraries
Anders Møller (Aarhus University, DK)
License © Creative Commons BY 3.0 Unported license © Anders Møller
Joint work of Møller, Anders; Feldthaus, Asger
URL http://dx.doi.org/10.1145/2660151.2660215

The TypeScript programming language adds optional types to JavaScript, with support for interaction with existing JavaScript libraries via interface declarations. Such declarations have been written for hundreds of libraries, but they can be difficult to write and often contain errors, which may affect the type checking and misguide code completion for the application code in IDEs. We present a pragmatic approach to check correctness of TypeScript declaration files with respect to JavaScript library implementations. The key idea in our algorithm is that many declaration errors can be detected by an analysis of the library initialization state combined with a light-weight static analysis of library function code. Our experimental results demonstrate the effectiveness of the approach: it has found 142 errors in the declaration files of 10 libraries, with an analysis time of a few minutes per library and with a low number of false positives. Our analysis of how programmers use library interface declarations furthermore reveals some practical limitations of the TypeScript type system.

5.8 Analyzing JavaScript Web Applications in the Wild (Mostly) Statically
Sukyoung Ryu (KAIST – Daejeon, KR)

Analyzing real-world JavaScript web applications is a challenging task. On top of understanding the semantics of JavaScript, it requires modeling of web documents, platform objects, and interactions between them. Not only JavaScript itself but also its usage patterns are extremely dynamic. Most web applications load JavaScript code dynamically, which makes pure static analysis approaches inapplicable.
We present our attempts to analyze JavaScript web applications in the wild mostly statically, using various approaches to analyze libraries.

6 Overview of Talks: Contracts

6.1 Membranes as Ownership Boundaries
Tom Van Cutsem (Alcatel-Lucent Bell Labs – Antwerp, BE)

We discuss the similarities and differences between membranes and higher-order contracts, give a brief overview of proxies in JS (which are the basic building block for membranes) and then show how membranes can be used to express the use cases typically expressed using ownership type systems.

6.2 TreatJS: Higher-Order Contracts for JavaScript
Matthias Keil (Universität Freiburg, DE)

TreatJS is a language-embedded, dynamic, higher-order contract system for JavaScript. Beyond the standard abstractions for building higher-order contracts (base, function, and object contracts), TreatJS’ novel contribution is its support for boolean combinations of contracts and for the creation of parameterized contracts, which are the building blocks for dependent contracts and, more generally, run-time generated contracts. TreatJS is implemented using JavaScript proxies to guarantee full interposition for contracts, and it exploits JavaScript’s reflective features to run contracts in a sandbox environment. This sandbox guarantees that contracts do not interfere with normal program execution. It also makes it possible to specify all aspects of a contract in the full JavaScript language. No source code transformation or change in the JavaScript run-time system is required. TreatJS, including sandboxing, is formalized, and the impact of contracts on execution speed is evaluated in terms of the Google Octane benchmark.

6.3 Contracts for Domain-Specific Languages in Ruby
Jeffrey Foster (University of Maryland, US)

This talk concerns object-oriented embedded DSLs, which are popular in the Ruby community but have received little attention in the research literature.
Ruby DSLs implement language keywords as implicit method calls to self; language structure is enforced by adjusting which object is bound to self in different scopes. We propose RDL, a new contract checking system that can enforce contracts on the structure of Ruby DSLs, attributing blame appropriately. We describe RDL and RDLInfer, a tool that infers RDL contracts for existing Ruby DSLs.

7 Overview of Talks: Languages

7.1 HOP: A Multi-tier Language For Web Applications
Tamara Rezk (INRIA Sophia-Antipolis, FR)

We present HOP, a multi-tier language for writing web applications. We propose a small-step operational semantics to support formal reasoning in HOP. The semantics covers both server side and client side computations, as well as their interactions, and includes creation of web services, distributed client-server communications, concurrent evaluation of service requests at server side, elaboration of HTML documents, DOM operations, evaluation of script nodes in HTML documents and actions from HTML pages at client side.

7.2 Perl: The Ugly Parts
Matthew Might (University of Utah, US)
License © Creative Commons BY 3.0 Unported license © Matthew Might

Let there be no mistake: Perl is extremely useful. Every programmer needs Perl in their arsenal. Thanks to many implicit behaviors, some complex programs can be specified with alarming brevity. Perl excels at extracting and transforming data. But, Perl is as dangerous as it is ugly. This talk looks at the ugly.

7.3 So, What About Lua?
Roberto Ierusalimschy (Pontifical University – Rio de Janeiro, BR)
License © Creative Commons BY 3.0 Unported license © Roberto Ierusalimschy

Lua is a programming language developed at the Catholic University in Rio de Janeiro that came to be the leading scripting language in video games. Lua is also used extensively in embedded devices, such as set-top boxes and TVs, and other applications like Adobe Lightroom and Wikipedia.
This talk presents a quick overview of some unconventional aspects of the language.

7.4 Regular Expression Parsing
Bjorn Bugge Grathwohl (University of Copenhagen – DK)
License © Creative Commons BY 3.0 Unported license © Bjorn Bugge Grathwohl
Joint work of Henglein, Fritz and Terp-Rasmussen, Ulrik

Regular expressions (REs) are usually interpreted as languages. For many programming tasks, this is an inadequate interpretation, as it only provides the programmer with a means for testing language membership. Facilities for submatch extraction in tools such as sed and Perl-style REs have been developed to let programmers do data extraction and manipulation with REs. However, the submatch extraction approach is severely limited in its expressibility, as it only allows for a fixed number of submatches, independent of the input size. Instead, we interpret REs as types. Testing language membership is replaced by a parsing problem: Given an RE $E$ and string $s$, produce the value (parse tree) in the type $T(E)$ whose flattening is $s$. With this interpretation, data extraction and manipulation can be performed by writing functional programs that operate on the data types represented by the REs. We present two automata-based algorithms producing the greedy leftmost parse tree: The two-pass algorithm requires one pass over the input data and an extra pass over an auxiliary data structure; the streaming algorithm implements an optimally streaming parser, in the sense that as soon as the input read so far determines a prefix of all possible parse trees, this prefix is output. This is guaranteed given a PSPACE-complete analysis of the automaton, which can be performed independently of any input strings. However, we conjecture that for “realistic”, non-pathological, REs, this analysis is not needed.
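The fixed-submatch limitation described in the abstract shows up directly in JavaScript's Perl-style matching, where a repeated capture group retains only its final iteration:

```javascript
// A repeated capture group yields exactly one submatch slot no matter how
// many times it matched; earlier iterations are silently discarded.
const m = "a b c ".match(/^(\w+ )+$/);
// m[0] is the whole match "a b c ", but m[1] is only "c ":
// the submatches "a " and "b " are lost, regardless of input length.
```

A parse tree in the type interpretation would instead record all three iterations of the group, which is the extra expressiveness the talk's RE-as-types view provides.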
7.5 HTML5 Parser Specification and Automated Test Generation
Yasuhiko Minamide (University of Tsukuba, JP)
License © Creative Commons BY 3.0 Unported license © Yasuhiko Minamide
Joint work of Minamide, Yasuhiko; Mori, Shunsuke
URL http://dx.doi.org/10.1007/978-3-642-32759-9_26

The HTML5 specification includes the detailed specification of the parsing algorithm for HTML5 documents, including error handling. We develop a reachability analyzer for the parsing specification of HTML5 and automatically generate HTML documents to test the compatibility of Web browsers. The set of HTML documents is extracted using our reachability analysis of the statements in the specification. In our preliminary experiments, we generated 353 HTML documents automatically from a subset of the specification and found several compatibility problems by supplying them to Web browsers.

7.6 AmbientTalk: a scripting language for mobile phones
Tom Van Cutsem (Alcatel-Lucent Bell Labs – Antwerp, BE)
License © Creative Commons BY 3.0 Unported license © Tom Van Cutsem

We introduce the AmbientTalk programming language, which was designed to script collaborative distributed applications on mobile phones. We give an overview of the language’s features and historical roots. We discuss how AmbientTalk is embedded on the JVM, with particular attention to maintaining concurrency invariants.

7.7 Glue Languages
Arjun Guha (University of Massachusetts – Amherst, US)
License © Creative Commons BY 3.0 Unported license © Arjun Guha
Joint work of Guha, Arjun; Gupta, Nimish

Puppet is a configuration management system used by thousands of organizations to manage thousands of machines. It is designed to automate tasks such as application configuration, service orchestration, VM provisioning, and more. The heart of Puppet is a declarative domain specific language that, to a first approximation, specifies a collection of resources (e.g., packages, user accounts, files, etc.) to install and the dependencies between them.
Although Puppet performs some static checking, there are many opportunities for errors to occur in Puppet configurations. These errors are very difficult to detect and debug. Even if a configuration is itself bug-free, when a machine is upgraded to a new configuration, it is easy for the machine state and its specified configuration in Puppet to be inconsistent.

8 Overview of Talks: Security

8.1 Information Flow Control in WebKit’s JavaScript Bytecode
Christian Hammer (Universität des Saarlandes, DE)

Websites today routinely combine JavaScript from multiple sources, both trusted and untrusted. Hence, JavaScript security is of paramount importance. A specific interesting problem is information flow control (IFC) for JavaScript. In this paper, we develop, formalize and implement a dynamic IFC mechanism for the JavaScript engine of a production Web browser (specifically, Safari’s WebKit engine). Our IFC mechanism works at the level of JavaScript bytecode and hence leverages years of industrial effort on optimizing both the source to bytecode compiler and the bytecode interpreter. We track both explicit and implicit flows and observe only moderate overhead. Working with bytecode results in new challenges including the extensive use of unstructured control flow in bytecode (which complicates lowering of program context taints), unstructured exceptions (which complicate the matter further) and the need to make IFC analysis permissive. We explain how we address these challenges, formally model the JavaScript bytecode semantics and our instrumentation, prove the standard property of termination-insensitive non-interference, and present experimental results on an optimized prototype.
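The distinction between the explicit and implicit flows such an IFC monitor must track can be sketched in a few lines of JavaScript (the function names are illustrative, not from the paper):

```javascript
// Explicit flow: the secret is copied directly into the public result.
function explicitFlow(secret) {
  return secret;
}

// Implicit flow: nothing is copied, but the public result still reveals
// the secret through which branch was taken. A dynamic IFC monitor must
// taint the "program context" inside the branch to catch this.
function implicitFlow(secret) {
  let pub = false;
  if (secret) {
    pub = true;
  }
  return pub;
}
```

Tracking the implicit case at the bytecode level is what makes unstructured control flow (jumps, exceptions) a challenge: the extent of the tainted program context is no longer delimited by source-level block structure.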
8.2 Hybrid Information Flow monitoring against Web tracking
Thomas Jensen (INRIA Bretagne Atlantique – Rennes, FR)

Motivated by the problem of stateless web tracking (fingerprinting), we propose a novel approach to hybrid information flow monitoring by tracking the knowledge about secret variables using logical formulae. This knowledge representation helps to compare and improve precision of hybrid information flow monitors. We define a generic hybrid monitor parametrised by a static analysis and derive sufficient conditions on the static analysis for soundness and relative precision of hybrid monitors. We instantiate the generic monitor with a combined static constant and dependency analysis. Several other hybrid monitors including those based on well-known hybrid techniques for information flow control are formalised as instances of our generic hybrid monitor. These monitors are organised into a hierarchy that establishes their relative precision. The whole framework is accompanied by a formalisation of the theory in the Coq proof assistant.

8.3 Intrusion Detection by Control Flow Analysis
Arjun Guha (University of Massachusetts – Amherst, US)
License © Creative Commons BY 3.0 Unported license © Arjun Guha
Joint work of Guha, Arjun; Krishnamurthi, Shriram; Jim, Trevor
URL http://dx.doi.org/10.1145/1526709.1526785

We present a static control-flow analysis for JavaScript programs running in a web browser. Our analysis tackles numerous challenges posed by modern web applications including asynchronous communication, frameworks, and dynamic code generation. We use our analysis to extract a model of expected client behavior as seen from the server, and build an intrusion-prevention proxy for the server: the proxy intercepts client requests and disables those that do not meet the expected behavior. We insert random asynchronous requests to foil mimicry attacks.
Finally, we evaluate our technique against several real applications and show that it protects against an attack in a widely-used web application.

8.4 Multiple Facets for Dynamic Information Flow
Cormac Flanagan (University of California – Santa Cruz, US)
License © Creative Commons BY 3.0 Unported license © Cormac Flanagan
Joint work of Flanagan, Cormac; Austin, Thomas H.
URL http://dx.doi.org/10.1145/2103656.2103677
URL http://users.soe.ucsc.edu/~cormac/papers/popl12b.pdf

JavaScript has become a central technology of the web, but it is also the source of many security problems, including cross-site scripting attacks and malicious advertising code. Central to these problems is the fact that code from untrusted sources runs with full privileges. We implement information flow controls in Firefox to help prevent violations of data confidentiality and integrity. Most previous information flow techniques have primarily relied on either static type systems, which are a poor fit for JavaScript, or on dynamic analyses that sometimes get stuck due to problematic implicit flows, even in situations where the target web application correctly satisfies the desired security policy. We introduce faceted values, a new mechanism for providing information flow security in a dynamic manner that overcomes these limitations. Taking inspiration from secure multi-execution, we use faceted values to simultaneously and efficiently simulate multiple executions for different security levels, thus providing non-interference with minimal overhead, and without the reliance on the stuck executions of prior dynamic approaches.

8.5 Shill: shell scripting with least authority
Christos Dimoulas (Harvard University, US)
License © Creative Commons BY 3.0 Unported license © Christos Dimoulas
Joint work of Moore, Scott; Dimoulas, Christos; King, Dan; Chong, Stephen

The Principle of Least Authority suggests that software should be executed with no more authority than it requires to accomplish its task.
Current security tools make it difficult to apply this principle: they either require significant modifications to applications or do not facilitate reasoning about combining untrustworthy components. We propose Shill, a secure shell scripting language. Shill scripts enable compositional reasoning about security through declarative security policies that limit the effects of script execution, including the effects of programs invoked by the script. These security policies are a form of documentation for consumers of Shill scripts, and are enforced by the Shill execution environment. We have implemented a prototype of Shill for FreeBSD. Our evaluation indicates that Shill is a practical and useful system security tool, and can provide fine-grained security guarantees.

8.6 Hybrid Information Flow Analysis for JavaScript
Tamara Rezk (INRIA Sophia-Antipolis, FR)
License © Creative Commons BY 3.0 Unported license © Tamara Rezk

We propose a novel type system for securing information flow in JavaScript that takes into account the defining features of the language, such as prototypical inheritance, extensible objects, and constructs that check the existence of object properties. The type system infers a set of assertions under which a program can be securely accepted and instruments it so as to dynamically check whether these assertions hold. By deferring rejection to run-time, the hybrid version can typecheck secure programs that purely static type systems cannot accept.

8.7 A Collection of Real World (JavaScript) Security Problems
Achim D. Brucker (SAP Research – Karlsruhe, DE)
License © Creative Commons BY 3.0 Unported license © Achim D. Brucker

JavaScript is gaining more and more popularity as an implementation language for various application types such as Web applications (client-side), mobile applications, or server-side applications.
We outline a few security challenges that arise in such applications and for which there is thus a demand for analysis methods that help to detect them during development.

9 Lightning Talks

9.1 Reasoning about membranes using separation logic
Gareth Smith (Imperial College – UK)
License © Creative Commons BY 3.0 Unported license © Gareth Smith
URL http://www.dagstuhl.de/mat/Files/14/14271/14271.SmithGareth.Other.pdf

We propose an extension to separation logic which would make it possible to statically prove security properties of an implementation of a membrane program.

9.2 Complexity Analysis of Regular Expression Matching Based on Backtracking
Yasuhiko Minamide (University of Tsukuba, JP)
License © Creative Commons BY 3.0 Unported license © Yasuhiko Minamide
Joint work of Sugiyama Satoshi; Minamide, Yasuhiko

Regular expression matching is implemented with backtracking in most programming languages. Its worst-case time complexity is exponential in the length of the string. This high complexity causes significant problems in practice. It causes DoS vulnerabilities in server-side applications. It may also affect the result of matching in implementations that limit the number of steps in matching, e.g., PCRE. We present a decision procedure to check whether, for a given regular expression, matching based on backtracking runs in linear time.

9.3 PHPEnkoder: a Wordpress Plugin
Michael Greenberg (Princeton University, US)
License © Creative Commons BY 3.0 Unported license © Michael Greenberg
URL http://wordpress.org/plugins/php-enkoder/

PHPEnkoder encodes mailto: links and e-mail addresses with JavaScript to stifle webcrawlers. It works by automatically turning plaintext e-mails into (enkoded) links. Interesting facts:
- Wordpress plugins are installed by being placed in a directory; the files are run at the top level.
- Wordpress plugins are automatically released by tagging in subversion.
- PHPEnkoder parses the page with regular expressions, since Wordpress ‘hooks’ don’t give PHPEnkoder an AST to process, just text. - Wordpress has an extremely stable API. For more on this plugin, see http://www.weaselhat.com/phpenkoder/. Static application security testing (SAST) is a widely used technique that helps to find security vulnerabilities in program code at an early stage in the software development lifecycle. In recent years, JavaScript has been gaining more and more popularity as an implementation language for large applications. Consequently, there is a demand for SAST tools that support JavaScript. We report briefly on our method for evaluating SAST tools for JavaScript and summarize the results of our analysis. 10 Breakout Sessions In addition to the contributed talks, the seminar had four breakout sessions focusing on cross-cutting issues deemed important by the participants. 10.1 Contracts and Blame Cormac Flanagan We discussed some of the counter-intuitive ways in which contracts can fail in systems with multiple modules, and the ways in which blame may be assigned in a manner that does not point at the component that is truly at fault. 10.2 On the Role of Soundness Matthew Might, Jeffrey Foster We debated the merits and importance of soundness for tools and analyses for scripting languages. On the one hand, soundness is essential for relying upon the results of an analysis; on the other, some constructs may be pathologically hard to analyze soundly, and even unsound tools may provide valuable feedback to the developer.
10.3 Metrics for Programming Tools Krishnamurthi, Shriram; Politz, Joe Gibbs License © Creative Commons BY 3.0 Unported license © Krishnamurthi, Shriram; Politz, Joe Gibbs URL https://drive.google.com/file/d/0B32bNEogmncORS1sN0YtaXZ3V1k/edit?usp=sharing We gathered metrics for measuring the utility of programming language tools (focused on scripting language applications), prompted by considering alternatives and complements to soundness. See sketch on the blackboard below. 10.4 JavaScript Analysis and Intermediate Representation Thomas Jensen (INRIA Bretagne Atlantique – Rennes, FR) License © Creative Commons BY 3.0 Unported license © Thomas Jensen Joint work of Jensen, Thomas; Sridharan, Manu Two issues were discussed: - how to share models of libraries, - whether we can come up with a common intermediate representation (IR) for JavaScript analyzers. The overall goal is to support a re-usable, shared effort. Modeling libraries is not very publishable, hence the need for a collective effort. Another issue is that different kinds of models are needed, depending on the analysis. Nevertheless, it was deemed worthwhile to have a common starting point. Models could be written in JavaScript, in an IR, or in a formalism that allows integrating elements of abstract domains. One point of view was that it would be valuable to have models in which everything is translatable to the IR, so that different library models can co-exist. Concerning the IR, several points were discussed: - Should it accommodate pre/post annotations to model libraries? - Should it be executable (which could enable re-injecting it into JavaScript to do dynamic analysis)? There is a certain amount of common structure in existing IRs, so one option is simply to pick one of them. Some shortcomings were discussed: - WALA: not serializable, which is necessary; - S5: should be OK, can be ANF-ed and CPS-ed; - MSR IR: has existing formats but is prepared to do a clean slate. Two different kinds of formats were identified: a CFG, or something close to the AST.
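As a toy aside, the serializability requirement raised in the discussion can be illustrated with a minimal, hypothetical AST-style node. This is our own sketch, not WALA, S5, or the MSR IR: the point is only that a node represented as plain data round-trips losslessly through a textual format.

```python
import json

# Hypothetical AST-like IR node as plain data: trivially JSON-serializable,
# which is the property noted as missing from one of the existing IRs.
def node(op, *children, **attrs):
    return {"op": op, "children": list(children), "attrs": attrs}

prog = node("call", node("var", name="f"), node("lit", value=42))
serialized = json.dumps(prog)
assert json.loads(serialized) == prog  # lossless round-trip
```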
Perhaps there is a need for a series of IRs that end in the common format, but at most two seems reasonable to standardize. The discussion ended with a presentation of a proposal for a common IR. The current version can be found at the URL above. Participants - Achim D. Brucker SAP Research – Karlsruhe, DE - Niels Bjoern Bugge Grathwohl University of Copenhagen, DK - Ravi Chugh University of California – San Diego, US - Arlen Cox Univ. of Colorado – Boulder, US - Christos Dimoulas Harvard University, US - Julian Dolby IBM TJ Watson Research Center – Hawthorne, US - Matthias Felleisen Northeastern University – Boston, US - Daniele Filaretti Imperial College London, GB - Cormac Flanagan University of California – Santa Cruz, US - Jeffrey Foster University of Maryland – College Park, US - Ronald Garcia University of British Columbia – Vancouver, CA - Philippa Gardner Imperial College London, GB - Michael Greenberg Princeton University, US - Arjun Guha University of Massachusetts – Amherst, US - Shu-yu Guo Mozilla – Mountain View, US - Christian Hammer Universität des Saarlandes, DE - Fritz Henglein University of Copenhagen, DK - Roberto Ierusalimschy PUC – Rio de Janeiro, BR - Thomas Jensen INRIA Bretagne Atlantique – Rennes, FR - Ranjit Jhala University of California – San Diego, US - Matthias Keil Universität Freiburg, DE - Shriram Krishnamurthi Brown University, US - Benjamin Lerner Brown University, US - Benjamin Livshits Microsoft Res. – Redmond, US - Sergio Maffeis Imperial College London, GB - Matt Might University of Utah, US - Yasuhiko Minamide University of Tsukuba, JP - Anders Møller Aarhus University, DK - Joe Gibbs Politz Brown University, US - Ulrik Terp Rasmussen University of Copenhagen, DK - Tamara Rezk INRIA Sophia Antipolis – Méditerranée, FR - Tiark Rompf EPFL – Lausanne, CH - Sukyoung Ryu KAIST – Daejeon, KR - Alan Schmitt INRIA Bretagne Atlantique – Rennes, FR - Jeremy G. Siek Univ. of Colorado – Boulder, US - Gareth Smith Imperial College London, GB - Manu Sridharan Samsung Research, US - Éric Tanter University of Chile, CL - Peter Thiemann Universität Freiburg, DE - Sam Tobin-Hochstadt Indiana University – Bloomington, US - Tom Van Cutsem Alcatel-Lucent Bell Labs – Antwerp, BE - David Van Horn University of Maryland – College Park, US - Panagiotis Vekris University of California – San Diego, US - Ben Wiedermann Harvey Mudd College – Claremont, US - Kwangkeun Yi Seoul National University, KR
Semantic Programming by Example with Pre-trained Models GUST VERBRUGGEN, KU Leuven, Belgium VU LE, Microsoft, USA SUMIT GULWANI, Microsoft, USA The ability to learn programs from few examples is a powerful technology with disruptive applications in many domains, as it allows users to automate repetitive tasks in an intuitive way. Existing frameworks for inductive synthesis perform only syntactic manipulations: they rely on the syntactic structure of the given examples, not their meaning. Any semantic manipulations, such as transforming dates, have to be manually encoded by the designer of the inductive programming framework. Recent advances in large language models have shown these models to be very adept at performing semantic transformations of their input when given just a few examples of the task at hand. When it comes to syntactic transformations, however, these models are limited in their expressive power. In this paper, we propose a novel framework for integrating inductive synthesis with few-shot learning language models to combine the strengths of these two popular technologies. In particular, the inductive synthesizer is tasked with breaking down the problem into smaller subproblems, among which those that cannot be solved syntactically are passed to the language model. We formalize three semantic operators that can be integrated with inductive synthesizers. To minimize invocations of expensive semantic operators during learning, we introduce a novel deferred query execution algorithm that treats the operators as oracles during learning. We evaluate our approach in the domain of string transformations: the combined methodology can automate tasks that cannot be handled by either technology on its own. Finally, we demonstrate the generality of our approach via a case study in the domain of string profiling. CCS Concepts: • Software and its engineering → Automatic programming; • Computing methodologies → Artificial intelligence.
Additional Key Words and Phrases: program synthesis, programming by example, language models 1 INTRODUCTION Teaching a machine to write programs that satisfy a given specification is widely regarded as one of the fundamental problems in artificial intelligence. More specifically, the task of inductive synthesis or programming by example, where the specification is given by (partial) examples of the desired output on given input, allows for the automation of repetitive tasks in a variety of domains. Examples of domains in which robust synthesizers have been rapidly adopted in industrial tools are IntelliCode suggestions for code refactoring in Visual Studio [Gao et al. 2020; Miltner et al. 2019; Rolim et al. 2017], extracting tabular data in PowerQuery [Le and Gulwani 2014] and, most famously, the FlashFill algorithm for performing string transformations in Excel [Gulwani 2011]. Semantics. Current approaches in inductive synthesis are limited to writing programs that perform only syntactic transformations of the input. All information required to perform such a syntactic transformation is either available from the specification or has to be explicitly encoded in the domain-specific language used by the synthesizer. A popular scenario that is often used to emphasize this limitation in the context of FlashFill is shown in Figure 1a. Without explicitly encoding information about months, the synthesizer makes a valiant attempt using only syntactic information—concatenating the constant “ember” to the input—but fails miserably. Explicitly encoding such information works for limited domains, such as dates, but quickly becomes infeasible as the number of domains grows, or when support for multiple languages or natural language processing is required for more complicated tasks. Recent advances in transformer architectures for large, autoregressive language models have shown that these models can perform few-shot learning without fine-tuning [Brown et al.
2020; Radford et al. 2018]. Given a short prompt of text, the autoregressive model returns a distribution of likely continuations of this snippet of text. By structuring the prompts in a specific format, for example, the question answering format in Figure 1b, the model adapts to the given task at inference time and effectively solves the given problem from just a few examples. A first key observation is that these models are trained on vast amounts of data and have been shown to contain a lot of information about the world [Petroni et al. 2019], and that querying these models for this information through prompts neatly integrates with the kind of specifications that are used in program synthesis. The prompt describes a specification on the output of the model by providing a few input and output examples in a designated format, just like an inductive specification does for inductive synthesis. A second observation is that language models use subword tokens to keep their vocabulary small [Sennrich et al. 2016] and the output is generated token by token. This allows simple, syntactic string transformation problems to be solved, but more complicated problems either require many examples or are not solved at all. Substring extraction based on regular expressions is hard, while operating on data structures other than strings is even harder. A simple task like extracting a constant number of characters from each word in a list of words is impossible if there is no combination of tokens that corresponds exactly to this substring. Based on these two observations, we propose a novel integration of pre-trained, autoregressive language models with inductive synthesis. These few-shot learners are used to introduce semantic operators to the underlying domain-specific language, which the synthesizer can then use to solve a new class of mixed syntactic and semantic problems, such as the one in Figure 1c.
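To make the contrast concrete, here is the kind of purely syntactic transformation that is trivial to express as a program but awkward for token-by-token generation. This is our own Python sketch (the function name is ours, not part of any synthesizer):

```python
def first_k_of_each_word(s, k=3):
    """Keep the first k characters of every whitespace-separated word."""
    return " ".join(w[:k] for w in s.split())

# A syntactic program expresses this exactly; a model emitting subword
# tokens has no guarantee that a token boundary falls after k characters.
print(first_k_of_each_word("support vector machine"))  # → sup vec mac
```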
More concretely, we introduce three new learnable semantic operators that map a string to another string (for semantic lookup), to an integer (for indexing into the input string), or to a boolean (for conditionals). They are learnable in the sense that their concrete executable semantics depend on each problem instance and are determined during the inductive synthesis process. During learning, however, the inductive synthesizer makes many calls to these operators, and because querying the language model is slow, learning becomes infeasible in practice. To that end, we introduce a deferred querying algorithm, which assumes these operators to be oracles during learning and uses the ranking step of inductive synthesis to pick the correct program. We have implemented this integration in the task domain of string transformations using the PROSE framework [Microsoft 2015] as FlashGPT3 and evaluate it on a collection of challenging transformation problems. Using the deferred querying algorithm, FlashGPT3 learns most programs in under 1s, with the most difficult problems taking less than 3s. On its own, GPT-3 solves fewer problems and typically requires significantly more examples on the problems that it can solve. Additionally, we demonstrate that semantic operators can be given descriptive names (for better program readability) and how these named semantic operators can be integrated in the task domain of string profiling. 1.1 Contributions In summary, we make the following contributions. - We propose a novel framework for integrating pre-trained language models with inductive synthesis by augmenting the language over which programs are synthesized with semantic operators that are powered by the language model. - We present a deferred execution algorithm for quickly learning programs with these semantic operators under the uncertainty of the language models. - We implement and evaluate this integration in the domain of string transformation problems.
- We present a case study on integrating semantics with string profiling. 2 MOTIVATING EXAMPLES We start by illustrating some repetitive task settings that have been disrupted by inductive synthesis and that would further benefit from semantic components. The core idea behind these inductive synthesis systems is to define an appropriate domain-specific language (DSL) that can succinctly represent various tasks in an underlying domain, and to describe an appropriate learning algorithm over the DSL [Gulwani et al. 2012]. In this section, we motivate the significance of extending such DSLs with semantic components. In Sections 4 (language) and 5 (learning), we show how learning can be performed over these semantic components. 2.1 String transformations Transforming strings by example is one of the most commonly used benchmarks in inductive synthesis. A major breakthrough in this domain was the FlashFill algorithm [Gulwani 2011]. Its ability to quickly and robustly learn string transformation programs from few examples helped ship it in Microsoft Excel. FlashFill is widely recognized as one of the first commercial applications of inductive program synthesis. FlashFill turned out to be a very popular feature in Excel, not least because 99% of spreadsheet users do not know programming and struggle with repetitive tasks. Consider the task of formatting a phone number as shown in Figure 2a. From the first four rows, FlashFill is able to learn a program SubStr2(v1, \d+, 1) ◦ "-" ◦ SubStr2(v1, \d+, 2) ◦ "-" ◦ SubStr2(v1, \d+, 3) that performs this transformation, where "quoted" strings are constants, ◦ denotes concatenation such that a ◦ b ≡ Concat(a, b), and SubStr2(s, r, i) extracts the i-th token that matches regular expression r. (a) Formatting phone numbers [Gulwani 2011]. (b) Formatting phone numbers by looking up the country code from its name. Fig. 2. Examples of repetitive transformation tasks.
<table> <thead> <tr> <th>Input</th> <th>Output</th> </tr> </thead> <tbody> <tr> <td>323-708-7700</td> <td>323-708-7700</td> </tr> <tr> <td>(425)-706-7709</td> <td>425-706-7709</td> </tr> <tr> <td>510.220.5586</td> <td>510-220-5586</td> </tr> <tr> <td>425-235-7654</td> <td>425-235-7654</td> </tr> <tr> <td>425/745.8139</td> <td>425-745-8139</td> </tr> </tbody> </table> <table> <thead> <tr> <th>Input $v_1$</th> <th>Input $v_2$</th> <th>Output</th> </tr> </thead> <tbody> <tr> <td>235-7654</td> <td>Taiwan</td> <td>(866) 235 7654</td> </tr> <tr> <td>174.5539</td> <td>Spain</td> <td>(34) 174 5539</td> </tr> <tr> <td>(254) 9620</td> <td>South Korea</td> <td>(82) 254 9620</td> </tr> <tr> <td>618 4390</td> <td>Panama</td> <td>(507) 618 4390</td> </tr> <tr> <td>447/4350</td> <td>Netherlands</td> <td>(31) 447 4350</td> </tr> </tbody> </table> (a) Generating exercises on completing present or past simple forms. (b) Marking pronouns. Solutions can be generated using markup style syntax. Fig. 3. Examples of problems and solutions from worksheets on English grammar. Another instance of string transformations that FlashFill struggles with is generating educational material for language learning. Textbooks and worksheets often require students to fill in gaps in sentences, to build sentences from abstract descriptions, to mark parts of sentences or to perform other manipulations of given sentences and words. Some examples of exercises are shown in Figure 3. The process of coming up with sentences, turning them into exercises and generating solution sheets is a repetitive process. Natural language sentences do not contain syntactic clues that FlashFill can use to determine positions for extracting substrings. Determining these positions requires understanding of natural language, and generating exercises in a specific format, or solutions with a specific markup, requires syntactic manipulations. 
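Returning to the phone-number task of Figure 2a, a minimal Python sketch of the SubStr2 operator and of one plausible reading of the learned concatenation program (the helper names are ours, and this is an illustration, not FlashFill's implementation):

```python
import re

def substr2(s, r, i):
    """SubStr2(s, r, i): the i-th (1-based) substring of s matching regex r."""
    ms = re.findall(r, s)
    return ms[i - 1] if len(ms) >= i else None

def format_phone(v1):
    # One reading of the learned program:
    # SubStr2(v1, \d+, 1) ◦ "-" ◦ SubStr2(v1, \d+, 2) ◦ "-" ◦ SubStr2(v1, \d+, 3)
    return "-".join(substr2(v1, r"\d+", k) for k in (1, 2, 3))

print(format_phone("(425)-706-7709"))  # → 425-706-7709
```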
Our method learns anonymous, semantic string $\rightarrow$ int operators such as $\text{positionLeftOfPredicate}(x)$ or $\text{positionRightOfSubject}(x)$ that exploit the ability of GPT-3 to parse the grammatical structure of sentences and return positions in the sentence that FlashFill can use. The full program that solves the task in Figure 3b is $$\text{SubStr}(v_1, 0, p_{\text{Left}}) \circ \text{"<u>"} \circ \text{SubStr}(v_1, p_{\text{Left}}, p_{\text{Right}}) \circ \text{"</u>"} \circ \text{SubStr}(v_1, p_{\text{Right}}, -1)$$ where $p_{\text{Left}} = \text{positionLeftOfPronoun}(v_1)$, $p_{\text{Right}} = \text{positionRightOfPronoun}(v_1)$ and integers are absolute positions in the string. $^1$Please refer to [Gulwani 2011] and [Polozov and Gulwani 2015] for a detailed overview of the FlashFill syntax. (b) Renaming identifiers (semantic): e.g., def attrs(o, log) → def attributes(o, log), and def exec(c) → def execute(c). (c) Adding comments with descriptions (semantic): e.g., def multiply(x, y). Fig. 4. Example scenarios of (1) the user making an edit and (2) the system suggesting another location to make the same edit. The user is able to click on the light bulb to see the suggestion and accept it, after which it is applied by the system (3). Today, Blue-Pencil can only do scenario (a). With our proposed integration, it is able to do (b) and (c) as well. 2.2 Refactoring When changing or refactoring code, developers often find themselves propagating an intended change at multiple places in the codebase. The Blue-Pencil algorithm tackles the problem of repetitive refactoring by making on-the-fly code suggestions [Miltner et al. 2019]. It looks at what the developer is doing, identifies repetitive edits and makes real-time suggestions, rather than requiring the developer to explicitly provide the system with input-output examples. For instance, consider a developer who needs to add a new logger argument to all functions in a project, as shown in Figure 4a.
Blue-Pencil sees the user making one edit, recognizes different locations where a function is defined, suggests applying the same transformation at those locations and then automatically applies the transformation if the user accepts the suggestion. However, Blue-Pencil only supports syntactic transformations and fails at tasks that require semantic knowledge, such as the one in Figure 4b. In this task, a developer wishes to change the naming convention from abbreviated forms to full names. Given the ability to learn renaming programs for symbols in the AST, this problem is very similar to learning semantic string transformation programs. Adding semantic operators enables renaming with natural language understanding. Another repetitive yet crucial task in programming that involves natural language is writing documentation. By extending the transformation language to operate on full syntax trees, which keep formatting and comments to guarantee lossless conversion between the syntax tree and source code, these semantic operations become even more powerful. This functionality is illustrated in Figure 4c, where descriptive comments are automatically suggested by the synthesizer, which has learned a getDescription(\(x\)) function. 2.3 Profiling The goal of string profiling is to learn succinct regular expression patterns that describe a collection of strings. These profiles are useful for a myriad of applications: checking the quality of data, computing the syntactic similarity between strings, tagging large datasets with column metadata [Song and He 2021], and making string transformation synthesizers more robust by improving the ranking of programs [Ellis and Gulwani 2017] or by learning separate programs for examples with different profiles [Padhi et al. 2018]. Fig. 5. Excerpts of candidate datasets for automatic string profiling. The recent FlashProfile [Padhi et al. 2018] algorithm uses an inductive-synthesis-based approach, representing these patterns as programs.
Let an atom be a function $f : \text{string} \rightarrow \text{int}$ that returns the length of the longest prefix of a given string that it matches. A program is simply a sequence of atoms; it matches a string if each atom matches the suffix left after matching all preceding atoms. FlashProfile supports syntactic atoms that match constant strings, regular expressions, character classes and arbitrary functions. As an example, consider the dates in Figure 5a. Two profiles $$\text{Digit}^4 \circ \text{Punct} \circ \text{Digit}^2 \circ \text{Punct} \circ \text{Digit}^2$$ $$\text{TitleWord} \circ \text{Space} \circ \text{Digit}^2 \circ \text{", "} \circ \text{Digit}^4$$ are learned, where $\circ$ denotes the concatenation of atoms, "quoted" strings are constants and $\text{Digit}^n$ matches $n$ digits. By asking for an output example for each pattern, the number of examples required to transform these dates into a standard format decreases. Next, consider the strings in Figure 5b. Syntactic patterns struggle to (i) capture symbols in strings and (ii) decide whether and how to group the words after the " - ". A semantic pattern can distinguish the colors and carriers without falling victim to irregular characters. An example pattern is $$\text{"iPhone 11 "} \circ \text{Digit}^+ \circ \text{"GB - "} \circ \text{matchColor} \circ \text{matchCarrier}$$ where matchX are anonymous semantic atoms. Our integration makes it possible to learn exactly these kinds of semantic atoms. 3 BACKGROUND Our proposed integration builds on the idea of decomposing the inductive synthesis problem into smaller subproblems and using the neural model to solve those subproblems that cannot be solved syntactically. The FlashMeta framework performs this kind of synthesis decomposition using deductive backpropagation [Polozov and Gulwani 2015].
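As a concrete aside, the atom and profile semantics of Section 2.3 can be sketched as follows. This is a simplified model under our own naming, not FlashProfile's implementation:

```python
import re

def atom(pattern):
    """An atom maps a string to the length of the prefix it matches (0 if none)."""
    def match_len(s):
        m = re.match(pattern, s)
        return m.end() if m else 0
    return match_len

def profile_matches(atoms, s):
    """A profile matches s if its atoms consume s left to right, entirely."""
    i = 0
    for a in atoms:
        n = a(s[i:])
        if n == 0:
            return False
        i += n
    return i == len(s)

# Digit^4 ◦ Punct ◦ Digit^2 ◦ Punct ◦ Digit^2
date_profile = [atom(r"\d{4}"), atom(r"[^\w\s]"), atom(r"\d{2}"),
                atom(r"[^\w\s]"), atom(r"\d{2}")]
print(profile_matches(date_profile, "2021/07/14"))  # → True
```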
In this section, we introduce the FlashMeta framework, as well as the specific flavour of neural network that we can use to solve those semantic problems that cannot be further decomposed using FlashMeta. In later sections, we show how to integrate such neural networks in the FlashMeta architecture. 3.1 FlashMeta All synthesizers that we described in the previous section are instantiations of the FlashMeta framework. More specifically, they are implemented using the publicly available implementation of this framework called PROSE. In the PROSE framework, developers can define a DSL (as a context-free grammar) and provide an executable function for each operator in the DSL. Given a specification of a program, typically as a set of input-output examples, the PROSE framework then provides synthesis strategies to search for a program over this DSL that satisfies the given specification. The main synthesis strategy is called *deductive backpropagation*, which recursively breaks down a problem into smaller subproblems that, once solved, can be used by a specific operator to solve the bigger problem. The logic of how a problem is to be broken down in subproblems is given by *witness functions* for each argument of each operator in the DSL. Given an operator and a specification, the witness function for a parameter of this operator should return specifications that the parameter should satisfy in order for the operator to satisfy the given specification. **Example 3.1.** Suppose we have an operator \( \text{sum}(a, b) \) that sums two integers and a specification that says that the output of this operator on some input \( \sigma \) should be 5, which we write as \( \sigma \rightarrow 5 \). The witness function for the argument \( a \) of \( \text{sum} \) then needs to answer what the value of \( a \) can be, for example, an integer \( \in [1, 4] \). It returns a disjunctive specification \( \sigma \rightarrow 1 \lor 2 \lor 3 \lor 4 \) with all possible values of \( a \). 
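As a sketch of the idea in Example 3.1 (our own toy code, not the PROSE API), the witness function for the first argument of sum might look like:

```python
def witness_sum_a(output):
    """Witness for argument a of sum(a, b): the disjunction of values a may
    take so that some positive b with a + b == output exists (here 1..output-1)."""
    return list(range(1, output))

# Once a is fixed, b is fully determined by the same specification σ → 5.
candidates = [(a, 5 - a) for a in witness_sum_a(5)]
assert all(a + b == 5 for a, b in candidates)  # every candidate satisfies the spec
```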
Following the body of a rule with \( a \) as head in the grammar, the algorithm then continues to look for a way to satisfy this specification—to make \( a \) be one of the allowed values. Rather than a single program, PROSE returns a set of programs that satisfy the specification, represented by a version space, and allows operations on these program sets. The intersection of two program sets is an important operation, which makes it possible to compute the set of programs that satisfy multiple specifications. In the original FlashFill setting, this corresponds to learning a program set for each individual row and then taking the intersection over these program sets to find those programs that correctly transform all rows. Finally, PROSE ranks the programs in the resulting program set, and allows custom scoring functions to be specified for each operator. The final score is computed bottom-up, with scoring functions for operators typically aggregating the scores obtained for their arguments. **Example 3.2.** In the previous example, we can assign higher scores to lower constant numbers and use the scoring function for \( \text{sum}(a, b) \) to assign calls in which \( a < b \) a higher score. It is exactly this breaking down of a problem into smaller subproblems, intersection over different examples and ranking that makes the PROSE framework an excellent candidate for integrating semantic operators powered by few-shot learning neural networks. In order to do so, we need to (i) define the operators with their semantics, (ii) define witness functions that specify how to learn the arguments to these operators and (iii) describe how programs with semantic operators should be ranked. ### 3.2 Generative Language Models In language modelling, a common task is to predict the next token given a sequence of input tokens. Repeated application of this process allows large language models to effectively generate text when given an initial sequence of context tokens.
Such autoregressive language generation quickly gained popularity with the impressive results obtained by the GPT models, which combine a transformer architecture, unsupervised pre-training, and millions of parameters. Recently, it was shown that such models are able to learn a task by encoding a few input and output examples of this task in the context [Brown et al. 2020]. An example task was already shown in Figure 1a, where the goal was to map the first three letters of a month to its full name. More general examples of tasks are machine translation, question answering and determining which word a pronoun refers to. This ability to detect the task is called *in-context learning*, and doing it from few examples is called *few-shot learning*. **3.2.1 Abilities.** We observe three tasks that GPT-3 is able to learn (closed book question answering, natural language understanding and text classification) and that can be used to create useful semantic operators. Closed book question answering. In closed book question answering, the goal is to answer a question about factual knowledge without access to a document that contains evidence [Roberts et al. 2020]. An example question from the TriviaQA dataset [Joshi et al. 2017] is “What does a manometer measure?” and the expected answer is “pressure”. It has been shown that large language models are able to store a lot of such knowledge in their parameters, with GPT-3 beating fine-tuned models that have explicit access to Wikipedia [Brown et al. 2020]. Figure 6a shows a query for one-shot QA using GPT-3. We exploit this knowledge to learn semantic mappings, such as mapping countries to their language code in the getCountryCode(\(x\)) function or expanding abbreviated month names in the getFullName(\(x\)) function. Natural language understanding. We consider natural language understanding to encompass concrete tasks like part-of-speech tagging, role labeling and cloze tests.
As GPT-3 is a generative model, these tasks are also framed as a QA prompt, but a different skill is required to solve them. Figure 6b shows an example of such a prompt, where the goal is to tag the predicate of a question. We exploit this ability to recognize parts of sentences for extracting semantic locations in the input, for example, in the getPositionLeftOfPronoun(\(x\)) function.

Classification. Finally, GPT-3 is able to perform text classification in a similar fashion. An example of classifying whether an athlete is a basketball player or a soccer player is shown in Figure 6c. We exploit the ability to semantically classify text to learn semantic matching functions, such as the matchColor() atom.

**Algorithm 1** Build query from examples

```
Require: list of tuples Q
Require: template T : string × string → string
function BuildPrompt(Q, T)
    Q ← Map(λ(q, a) ⇒ T(q, a), Q)
    return Join(Q, "\n")
```

**Algorithm 2** Semantic function through QA

```
Require: list of examples E and new input x
Require: list of allowed output tokens L
function QA(E, x, L)
    prompt ← BuildPrompt(E + (x, ε), T)
    return M(prompt, L)
```

4 SEMANTIC OPERATORS

We introduce three generic semantic operators that each exploit one of the identified abilities of GPT-3. Each of these operators builds a prompt, performs a query and parses the result. The data used to construct the prompt is made an argument of the operators. Learning a specific operator, such as extracting country codes, then corresponds to learning this argument. As FlashMeta is designed to support operator reuse, these operators can be easily integrated into other DSLs. Throughout this section, we use integration with the FlashFill DSL [Gulwani 2011; Polozov and Gulwani 2015] to illustrate the new operators. The augmented DSL is shown in Figure 7.
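The prompt construction of Algorithms 1 and 2 can be sketched in Python. The `model` callable and the toy fact below are hypothetical stand-ins for the GPT-3 call:

```python
# Sketch of Algorithms 1 and 2: render each example tuple through a
# template, join the rendered examples with newlines, and append the new
# input with an empty answer so the model completes it.

def build_prompt(examples, template):
    return "\n".join(template(q, a) for q, a in examples)

def qa(model, examples, x, template=lambda q, a: f"Q: {q}\nA: {a}"):
    prompt = build_prompt(examples + [(x, "")], template)
    return model(prompt)

# Toy model that "knows" a single fact, for illustration only:
toy = lambda prompt: "pressure" if "manometer" in prompt else "?"
print(qa(toy, [("What is the capital of Belgium?", "Brussels")],
         "What does a manometer measure?"))  # pressure
```

In the actual system, the model call additionally receives the list of allowed output tokens \( L \) to constrain generation.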
We use the following syntactic sugar for readability purposes: top-level conditionals are omitted when not applicable, and the shorthand notation \( F(v_i, -) \) is used for \( \text{let } s = \text{std.Kth}(vs, i) \text{ in } F(s, -) \).

```
language FlashFill;
using FlashGPT3;

@output string start := e | std.ITE(cond, e, start);
string e := f | Concat(f, e);
string f := ConstStr(w) | SubStr(vi, pp) | SemMap(vi, Q);
Tuple<int, int> pp := std.Pair(pos, pos);
int pos := AbsPos(x, k) | RegPos(x, rr, k) | SemPos(x, Q, m);
Tuple<Regex, Regex> rr := std.Pair(r, r);
bool cond := Match(vi, r, k) | SemMatch(vi, P, N);
@input string[] vs;
string w; int k; Regex r;
Tuple<string, string>[] Q;
string[] P, N; string m;
```

Fig. 7. The augmented DSL.

string SemMap(string v, Query Q) {
  return QA(Q, v);
}

Listing 1. Executable semantics of the semantic mapping function.

int SemPos(string x, Tuple<string, string>[] Q, string d) {
  string answer = QA(Q, x, AllWordsIn(x));
  MatchCollection ms = new Regex(answer).Matches(x);
  if (ms.Count != 1) return null;
  return (d == "L") ? ms[0].Index : ms[0].Index + ms[0].Length;
}

Listing 2. Executable semantics of the semantic position logic. The AllWordsIn function extracts all words from the given strings, which are used to constrain the output to return only words in x.

4.1 Maps

The semantic map operator \( \text{SemMap}(v, Q) \) is used to look up semantic properties of the input. Listing 1 shows its executable semantics, which simply calls GPT-3 using \( Q \) as examples and \( v \) as the new question. Semantic map requires no additional logic on top of the QA function.

**Example 4.1.** Let us revisit the example of country codes from Figure 1c. The \( \text{getCountryCode}(x) \) function can be easily represented as a semantic map by building an appropriate set of input-output examples \( Q \). The full program becomes:
ConstStr("(") ◦ SemMap(v, Q) ◦ ConstStr(")") ◦ SubStr2(v, NumTok, 1) ◦ ConstStr(" ") ◦ SubStr2(v, NumTok, 2)
Q = [("Taiwan", "886"), ("Spain", "34"), ("South Korea", "82")]

**Example 4.2.** In textbooks and course notes on grammar, examples of irregular forms are often provided. For grammatical constructs with many irregular forms, such as plurals or tenses, generating and formatting these examples is a very repetitive task. We can use the power of language models to easily generate formatted examples. Consider, for example, a table of comparative and superlative adjectives.

<table>
<thead>
<tr> <th>Input v₁</th> <th>Output</th> </tr>
</thead>
<tbody>
<tr> <td>good</td> <td>good – better – best</td> </tr>
<tr> <td>old</td> <td>old – older – oldest</td> </tr>
<tr> <td>many</td> <td>many – more – most</td> </tr>
</tbody>
</table>

Using such semantic lookups, these tables can easily be generated from just the base adjective with the following transformation program:

v₁ ◦ ConstStr(" - ") ◦ SemMap(v₁, Q₁) ◦ ConstStr(" - ") ◦ SemMap(v₁, Q₂)
Q₁ = [("good", "better"), ("old", "older"), ("many", "more")]
Q₂ = [("good", "best"), ("old", "oldest"), ("many", "most")]

4.2 Position logic

Position logic is used to determine interesting locations in the input, which is useful in tasks that involve formatting and extraction. Semantic positions are similar to regular expression positions. We use the model to select a substring from the input and return either the left or right position of that substring. The output is constrained to be a substring of the input. Listing 2 shows the executable semantics of position logic.

**Example 4.3.** Consider exercises in which a specific part of a sentence has to be underlined or emphasized. Mapping the sentence to the correct word is not sufficient; the position of the word is required to build the correct output string.
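The boundary computation of the position logic in Listing 2 can be sketched as follows. The model call is replaced here by a precomputed `answer`, so this illustrates only the position extraction, not the query:

```python
import re

# Sketch of the semantic position logic: the model names a substring of
# the input, and we return its left or right boundary, provided the
# substring occurs exactly once in the input.

def sem_pos(x, answer, direction):
    matches = list(re.finditer(re.escape(answer), x))
    if len(matches) != 1:      # ambiguous or absent: fail
        return None
    m = matches[0]
    return m.start() if direction == "L" else m.end()

s = "The dog barked really loud."
print(sem_pos(s, "barked", "L"), sem_pos(s, "barked", "R"))  # 8 14
```

Requiring a unique occurrence mirrors the `ms.Count != 1` check in Listing 2: an ambiguous answer cannot pin down a single position.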
bool SemanticMatch(string x, string[] P, string[] N) {
  var p = Map(s => new Tuple<string, string>(s, "True"), P);
  var n = Map(s => new Tuple<string, string>(s, "False"), N);
  string answer = QA(p + n, x);
  return answer == "True";
}

Listing 3. Executable semantics of the semantic condition function.

<table>
<thead>
<tr> <th>v₁</th> <th>Output</th> </tr>
</thead>
<tbody>
<tr> <td>Dogs are great.</td> <td>Dogs \textbf{are} great.</td> </tr>
<tr> <td>I love dogs.</td> <td>I \textbf{love} dogs.</td> </tr>
<tr> <td>The dog barked really loud.</td> <td>The dog \textbf{barked} really loud.</td> </tr>
</tbody>
</table>

The following transformation program uses the semantic positioning logic to build the output from five parts. Four different SemPos invocations are used, but they only require one call to the semantic model by caching the results. In the query, we denote the string in row \( i \) and column \( v_1 \) by \( I_i \).

SubStr(v₁, std.Pair(AbsPos(0), SemPos(v₁, Q, "L")))
◦ ConstStr("\textbf{")
◦ SubStr(v₁, std.Pair(SemPos(v₁, Q, "L"), SemPos(v₁, Q, "R")))
◦ ConstStr("}")
◦ SubStr(v₁, std.Pair(SemPos(v₁, Q, "R"), AbsPos(-1)))
Q = [(I₁, "are"), (I₂, "love"), (I₃, "barked")]

**Example 4.4.** Conversely, we can also start from a sentence and generate exercises. This requires both semantic positions and mapping.
<table>
<thead>
<tr> <th>Input v₁</th> <th>Output o</th> </tr>
</thead>
<tbody>
<tr> <td>A bird is smaller than a dog.</td> <td>A bird is (smaller/smallest) than a dog.</td> </tr>
<tr> <td>He had the worst cold ever.</td> <td>He had the (worse/worst) cold ever.</td> </tr>
<tr> <td>Jogging is faster than walking.</td> <td>Jogging is (faster/fastest) than walking.</td> </tr>
</tbody>
</table>

The following program uses semantic position logic on the adjective to allow the output to be composed from parts of the input and interjected constants, and a semantic map to obtain the remaining form of the adjective.

SubStr(v₁, std.Pair(AbsPos(0), SemPos(v₁, Q₁, "L")))
◦ ConstStr("(")
◦ SubStr(v₁, std.Pair(SemPos(v₁, Q₁, "L"), SemPos(v₁, Q₁, "R")))
◦ ConstStr("/") ◦ SemanticMap(v₁, Q₂) ◦ ConstStr(")")
◦ SubStr(v₁, std.Pair(SemPos(v₁, Q₁, "R"), AbsPos(-1)))
Q₁ = [(I₁, "smaller"), (I₂, "worse"), (I₃, "faster")]
Q₂ = [(I₁, "smallest"), (I₂, "worst"), (I₃, "fastest")]

**4.3 Conditions**

The top-level statement decides which expression to use for constructing the output. Classically, this was done by learning a pattern based on regular expressions that some column must satisfy. Given a set of positive and negative examples, we use GPT-3 to learn how to classify them by mapping them to "True" or "False". The semantics are shown in Listing 3.

**Example 4.5.** Consider a dataset of athletes and game scores, where the sport itself was lost. We need to distinguish whether a player scores goals or points by deciding their sport. Rather than explicitly mapping an athlete to a sport, GPT-3 implicitly learns to make the distinction.
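The condition semantics of Listing 3 can be sketched as follows. `stub_model` is a hypothetical stand-in for the GPT-3 classification call:

```python
# Sketch of the semantic condition: positive and negative examples are
# labelled "True"/"False", and the model classifies a new input.

def semantic_match(model, x, positives, negatives):
    examples = [(p, "True") for p in positives] + \
               [(n, "False") for n in negatives]
    return model(examples, x) == "True"

def stub_model(examples, x):
    # Toy classifier keyed on a fixed set of soccer players.
    soccer = {"Christiano Ronaldo", "Lionel Messi", "Neymar"}
    return "True" if x in soccer else "False"

print(semantic_match(stub_model, "Neymar",
                     ["Christiano Ronaldo"], ["Lebron James"]))  # True
```

The point of the real operator is that the model generalizes the boundary between the two example sets to unseen inputs, which the stub only imitates.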
<table>
<thead>
<tr> <th>v₁</th> <th>v₂</th> <th>Output</th> </tr>
</thead>
<tbody>
<tr> <td>Christiano Ronaldo</td> <td>1026</td> <td>1026 goals</td> </tr>
<tr> <td>Lebron James</td> <td>34852</td> <td>34852 points</td> </tr>
<tr> <td>Lionel Messi</td> <td>711</td> <td>711 goals</td> </tr>
</tbody>
</table>

The following transformation program uses the semantic conditional to distinguish whether the "goals" or "points" constant should be used.

std.ITE(SemanticMatch(v₁, P, N),
    v₂ ◦ ConstStr(" goals"),
    v₂ ◦ ConstStr(" points"))
P = ["Christiano Ronaldo", "Lionel Messi"]
N = ["Lebron James"]

**Example 4.6.** In this classical example, the goal is to extract the month from dates. Depending on localization, however, the month is in a different location. The third format can be distinguished using only syntax, but the first two rows require a semantic condition to decide whether to use the American standard or not.

<table>
<thead>
<tr> <th>v₁</th> <th>v₂</th> <th>Output</th> </tr>
</thead>
<tbody>
<tr> <td>Chicago</td> <td>01/02/1990</td> <td>01</td> </tr>
<tr> <td>Brussels</td> <td>01/02/1991</td> <td>02</td> </tr>
<tr> <td>Beijing</td> <td>1992-03-02</td> <td>03</td> </tr>
</tbody>
</table>

std.ITE(SemanticMatch(v₁, P, N),
    SubStr(v₂, std.Pair(AbsPos(0), AbsPos(2))),
    std.ITE(Match(v₂, "-", 1),
        SubStr(v₂, std.Pair(AbsPos(5), AbsPos(7))),
        SubStr(v₂, std.Pair(AbsPos(3), AbsPos(5)))))
P = ["Chicago"]
N = ["Brussels", "Beijing"]

5 LEARNING SEMANTIC OPERATORS

Learning semantic operators boils down to selecting the right data to build the prompt, which is done by implementing their witness functions. Recall that witness functions are used to determine a specification on the parameters of an operator, given a specification of the operator.
In other words, if an operator \( f(x_1, x_2) \) must satisfy a specification \( \varphi \), the witness functions \( w_1 \) and \( w_2 \) for the arguments \( x_1 \) and \( x_2 \) must determine new specifications \( \varphi_1 \) and \( \varphi_2 \) that the arguments must satisfy for this to be true. The deductive backpropagation algorithm uses these witness functions to recursively break down a synthesis problem into smaller synthesis problems. The intuition behind our integration is to use the semantic model to solve those subproblems that cannot be solved in any other way.

There are two main challenges: given a specification, the witness function is unable to determine (i) whether syntactic operators are able to solve this specification, and (ii) whether GPT-3 is able to solve it, at least not without performing many queries, which are both slow and expensive. To address both challenges, we consider the model to be an oracle that always gives the correct answer during learning. Ranking is then used to select programs that perform few distinct calls to the model and that obtain the smallest number of output characters through semantic operators. We refer to this technique as deferred query execution.

All semantic operators have an argument \( x \) that corresponds to the input that should be mapped. For example, in a spreadsheet context, it is one of the input columns. Witness functions are conditional on the selected input, which is taken care of by the DSL in which the operator is integrated. For example, in the spreadsheet context and \( L_{SFF} \), the \texttt{let} statement selects a column and the witness function is given only this column. When there are multiple inputs, selecting the (most likely) correct input can be done through ranking.
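As a concrete illustration of how a witness function breaks a problem down, the well-known prefix witness for the first argument of Concat can be sketched as follows (a simplification of the PROSE mechanism, which pairs each prefix with the matching suffix spec for the second argument):

```python
# Sketch of deductive backpropagation for Concat(e1, e2): if the output
# must equal `out`, then e1 must produce some proper prefix of `out`,
# so the witness for e1 returns a disjunction of prefixes, one branch
# per split point; the remaining suffix is then sent to e2.

def witness_concat_arg1(out):
    return [out[:i] for i in range(1, len(out))]

print(witness_concat_arg1("¥ 10"))  # ['¥', '¥ ', '¥ 1']
```

Each returned prefix starts a separate synthesis branch, which is exactly the branching behaviour described for semantic maps below Example 5.1.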
In the following sections, all specs are a conjunction of \( x \leadsto V \) with \( x \) the input string and \( V = \{v_1, \ldots, v_n\} \) the atoms in a disjunction, all of the return type of the operator. For example, in an outer specification for \texttt{SemMap}, \( V \) is a set of strings. As all semantic operators are terminal, their witness functions return a list of possible values for the argument that they witness, instead of a mapping from states to values; these values should hold for all states.

### 5.1 Learning maps

In order to learn a semantic map, we need to learn the query given a set of disjunctive specifications over strings. When each disjunction consists of a single element, the witness function is trivial: simply map the input to each of these strings. When multiple options are possible, however, it becomes more challenging. To keep the witness function complete, all combinations of queries obtained by picking one option from each disjunction have to be considered. If the semantic map is required closer to the root of the target program, other operators depend on the exact query to build the rest of the output. For instance, if the semantic map is required in the first argument of a \texttt{Concat} statement, for each possible query, a new branch is started, which can again contain semantic maps.

**Example 5.1.** Consider this spec obtained from the witness of the first argument of \texttt{Concat} in \( L_{SFF} \) (note that the second option in each disjunction carries a trailing space).

\[
\begin{align*}
\text{Japan} & \leadsto \text{"¥"} \lor \text{"¥ "} \lor \text{"¥ 10"} \\
\text{France} & \leadsto \text{"€"} \lor \text{"€ "} \lor \text{"€ 20"}
\end{align*}
\]

Three out of nine possible queries are

[("Japan", "¥"), ("France", "€")]
[("Japan", "¥ 10"), ("France", "€")]
[("Japan", "¥"), ("France", "€ ")]

and it is impossible to know which one is correct.
If this semantic map is the first argument of a \texttt{Concat} statement and the second query is chosen, the spec sent to the second argument is

\[ \sigma_1 \leadsto \text{""} \land \sigma_2 \leadsto \text{" 20"} \]

and in a similar way, nine branches are started for the second argument.

In order to minimize the number of possible queries and make learning tractable, we sacrifice completeness for speed and perform greedy clustering over the possibilities from each disjunctive spec. Each cluster contains exactly one string from every disjunct. The spec with the highest number \( k \) of possibilities is taken as a reference and used to initialise \( k \) clusters. The possibilities of every other spec are then greedily assigned, one per cluster, using a similarity function between two strings. The greedy clustering algorithm and witness function are shown in Algorithm 3.

**Example 5.2.** In the running example, using syntactic similarity based on the occurrence of tokens, we get three clusters {"¥", "€"}, {"¥ ", "€ "} and {"¥ 10", "€ 20"}.

**Algorithm 3** Learning the semantic mapping query.
**Require:** similarity function $S : \text{string} \times \text{string} \rightarrow \mathbb{R}$
1: function GreedyCluster($\mathcal{Y}$)
2: reference $\leftarrow$ longest $Y \in \mathcal{Y}$
3: $C \leftarrow [[e] \mid e \in \text{reference}]$ ▷ Initialize clusters with reference
4: for $Y \in \mathcal{Y}$ with $Y \neq \text{reference}$ do
5: $C' \leftarrow C$ ▷ Make shallow copy of clusters
6: while $C' \neq \emptyset$ do
7: $e^*, C^* \leftarrow \arg\max_{e \in Y, C \in C'} S(e, C[0])$ ▷ Unassigned element closest to reference
8: append $e^*$ to $C^*$
9: remove $C^*$ from $C'$
10: return $C$
11: function WitnessMapQ($\varphi$)
12: options $\leftarrow \{Y \mid x \leadsto Y \in \varphi\}$
13: clusters $\leftarrow$ GreedyCluster(options)
14: return MakeQueries(clusters, $\varphi$) ▷ Map states to elements from clusters

### 5.2 Learning position logics

Learning the query to extract a position starts from a set of disjunctions over the positions to extract. The direction (left or right) is given. Let $p$ be a position and $s$ the string. Depending on the direction, we generate the strings $s[p : p + j]$ or $s[p - j : p]$ for increasing $j$ as candidate values for the query. Instead of all $j$, we only select interesting candidates for $j$ by tokenizing the string $s[p : p + j]$ or $s[p - j : p]$ with a tokenizer that extracts interesting positions, for example, on word boundaries.

**Example 5.3.** Given the spec "He wanted to eat pizza." $\leadsto 3$ and direction left, the string "wanted to eat pizza." is tokenized into ["wanted", "to", "eat", "pizza"] and possible $j$ are [6, 9, 13, 20]. Three candidate values for the query are "wanted", "wanted to" and "wanted to eat".

Even with low values of $j$, the number of different queries quickly increases. The same greedy clustering approach used to learn maps is also used to select a subset of promising queries. The witness function for learning a left position is shown in Algorithm 4.
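The GreedyCluster procedure of Algorithm 3 can be sketched as follows. The character-overlap similarity is a toy stand-in for the syntactic similarity used in the implementation:

```python
# Sketch of GreedyCluster: the longest option list serves as reference;
# the elements of every other list are greedily assigned to the cluster
# whose reference element they resemble most.

def greedy_cluster(option_lists, S):
    reference = max(option_lists, key=len)
    clusters = [[e] for e in reference]
    for ys in option_lists:
        if ys is reference:
            continue
        remaining = list(clusters)       # clusters still open for this list
        unassigned = list(ys)
        while remaining and unassigned:
            e, c = max(((e, c) for e in unassigned for c in remaining),
                       key=lambda p: S(p[0], p[1][0]))
            c.append(e)
            unassigned.remove(e)
            remaining.remove(c)
    return clusters

# Toy similarity: Jaccard overlap of character sets.
S = lambda a, b: len(set(a) & set(b)) / len(set(a) | set(b))
print(greedy_cluster([["¥", "¥ ", "¥ 10"], ["€", "€ ", "€ 20"]], S))
# [['¥', '€'], ['¥ ', '€ '], ['¥ 10', '€ 20']]
```

On the currency options this reproduces the three clusters of Example 5.2: the number-bearing options group together because they share the space and digit characters.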
Learning a right position is almost identical, with line 2 selecting sides $x[: p]$ and line 3 using a function that extracts interesting right positions.

**Algorithm 4** Learning a position query for left direction.

**Require:** tokenizing function $\text{LeftPositions} : \text{string} \rightarrow \text{int}[]$
1: function WitnessPosQueryLeft($\varphi$)
2: sides $\leftarrow \{x \rightarrow \{x[p :] \mid p \in V\} \mid x \leadsto V \in \varphi\}$
3: tokens $\leftarrow \{x \rightarrow \{(s, \text{LeftPositions}(s)) \mid s \in S\} \mid (x \rightarrow S) \in \text{sides}\}$
4: candidates $\leftarrow \{x \rightarrow \{s[: t[j]] \mid (s, t) \in T, j \in [1, |t|]\} \mid (x \rightarrow T) \in \text{tokens}\}$
5: clusters $\leftarrow$ GreedyCluster(candidates)
6: return MakeQueries(clusters, $\varphi$)

### 5.3 Learning conditions

The witness for learning conditions is given a conjunction of $\sigma \leadsto B$ specs. All inputs mapped to true correspond to positive examples and vice versa. In learning conditions, the main challenge is knowing when to learn a different program. The FlashFill paper [Gulwani 2011] describes a procedure based on a partitioning of the input with (i) each partition having a program that is consistent with the specification for that partition and (ii) the number of partitions being as small as possible. Because of deferred querying, with SemMap acting like an oracle during learning, the synthesizer will always yield a program that is consistent with any partitioning of the input. We therefore assume that semantic conditions are only used to distinguish otherwise syntactic programs. Both examples shown in Section 4.3 satisfy this assumption.

### 5.4 Ranking

After learning programs, we rank them with respect to the following criteria: (i) relying as much as possible on syntactic operators and (ii) requiring as few distinct queries to the model as possible.
Because semantic operators are considered oracles during learning, the synthesizer can use them to solve any subproblem with the appropriate type, but we only want to use them for subproblems that cannot be solved syntactically.

**Example 5.4.** Consider a problem "Dogs are great." → "Dogs \textbf{are} great." that requires extracting the predicate. We need to learn two semantic positions (5 and 8) that are used in three substring operators. When learning the first semantic position, given the left direction, two possible outputs for queries are "are" and "are great". Similarly, for the second position, given the right direction, two options are "Dogs are" and "are". For all combinations of queries, a valid program will be learned, but only by selecting "are" will the query perform exactly the desired task (first criterion). Note that the constant "\textbf{" and other parts of the output can also be the result of a semantic map, hence the goal to rely as much as possible on syntactic operators (second criterion).

To support easily integrating semantic operators, we want the ranking to be as independent as possible from the ranking of other operators. This independence is achieved by assigning semantic operators a score of 1 during the hierarchical ranking, with the goal of having a minimal influence in both additive and multiplicative aggregation of ranks. After ranking, programs are re-ranked based on semantic operators. Map queries are punished based on the number of characters that they are expected to output. Let $Q$ be the list of input-output examples and $\#c = \sum_{(i, o) \in Q} |o|$ the number of characters obtained through this query. We add a bonus score $S_m/\#c$ to the original score, rewarding programs that obtain few characters through semantic maps. Position and condition queries are punished based on the number of distinct position queries. For example, extracting the left and right side of the same word only requires a single query.
A second bonus score $S_q/\#q$ is added to the final score, with $\#q$ the number of distinct queries required by semantic position and condition operators.

6 EVALUATION

We perform experiments to answer the following questions.

- **Q1** Is a combination of syntactic and semantic parsing required?
- **Q2** Do we need deferred execution of queries and greedy clustering to learn programs quickly?
- **Q3** Case study: can we generate descriptive names for semantic operators?
- **Q4** Case study: can we easily use semantic operators in other domains?

6.1 Environment

6.1.1 Implementation. Starting from a string transformation DSL on the PROSE website, we added the position and map operators in a prototype implementation called FlashGPT3. A syntactic similarity measure is used in the greedy clustering over possible queries. We count the number of occurrences of tokens to generate a feature vector and compute the similarity as the cosine similarity between these vectors. Tokens are lowercase words, uppercase words, camel case words, numbers and a list of specific symbols (`,.;'+*_!()"'/?%@#$[]{}<>=`).

6.1.2 Benchmark Suite. Three types of benchmark problems are collected. The first are head cases used to evaluate basic capabilities that we believe a semantic transformation synthesizer should offer. This type of head cases was also used to evaluate TDE [He et al. 2018]. The second are examples related to the language learning domain, taking inspiration from course notes on English grammar and websites with worksheets (www.perfect-english-grammar.com/grammar-exercises.html, www.agendaweb.org/grammar-exercises.html, www.englisch-hilfen.de/en/exercises_list/alle_grammar.htm, www.english-4u.de/grammar_exercises.htm). Together, these make up a collection of 30 diverse and challenging problems that require a combination of syntactic and semantic parsing.
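The syntactic similarity of Section 6.1.1 can be sketched as follows. The exact featurization (token classes as features, with each symbol as its own feature) is an assumption; the text only lists the token classes:

```python
import math
import re
from collections import Counter

# Sketch of the syntactic similarity used during greedy clustering:
# strings are mapped to feature vectors of token counts and compared by
# cosine similarity.

def features(s):
    counts = Counter()
    for tok in re.findall(r"[A-Za-z]+|\d+|[^\w\s]", s):
        if tok.isdigit():
            counts["NUM"] += 1
        elif tok.islower():
            counts["LOWER"] += 1
        elif tok.isupper():
            counts["UPPER"] += 1
        elif tok.isalpha():
            counts["CAMEL"] += 1
        else:
            counts[tok] += 1     # each symbol is its own feature
    return counts

def cosine(a, b):
    fa, fb = features(a), features(b)
    dot = sum(fa[t] * fb[t] for t in fa)
    na = math.sqrt(sum(v * v for v in fa.values()))
    nb = math.sqrt(sum(v * v for v in fb.values()))
    return dot / (na * nb) if na and nb else 0.0

# "¥ 10" and "€ 20" share the number feature, "€" alone shares nothing:
print(cosine("¥ 10", "€ 20") > cosine("¥ 10", "€"))  # True
```

Under this featurization the currency options with digits cluster together, matching the behaviour in Example 5.2.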
Finally, we use a subset of 30 of the internal FlashFill benchmarks to evaluate the syntactic capabilities of GPT-3.

6.1.3 Model and hyperparameters. We use the largest davinci model with 175 billion parameters and set the temperature parameter, which roughly determines the level of creativity of the generated output, to 0, as used in the demonstration of QA on the OpenAI website (https://beta.openai.com/examples/default-qa). All experiments were performed on a laptop.

6.1.4 Inference time. The run-time of FlashGPT3 programs on new inputs is heavily dominated by calls to the GPT-3 API. Across all experiments, we measured an average query time of 462 ± 271 ms. Asynchronous execution speeds this process up for multiple rows, as the latency is largely caused by network overhead. We have found that performing more than four concurrent invocations is met with rate limiting. This limitation stems from the fact that a single endpoint is responsible for serving all GPT-3 calls in the world. Commercially shipping FlashGPT3 is then possible through a dedicated endpoint that does not limit concurrent requests.

6.2 Combining syntactic and semantic operators

As GPT-3 is able to perform syntactic manipulations, we start by evaluating whether it is required to combine it with syntactic inductive programming, and argue why such an integration is relevant regardless of GPT-3 being able to solve some problems on its own.

6.2.1 Experimental Setup. Our evaluation takes the first $n$ examples to learn programs, and uses the top-ranked program to try to solve all remaining examples. Once a program is obtained that solves all remaining cases, execution is stopped. Experiments with only GPT-3 are performed with a DSL that only contains the SemMap operator. We use the version of FlashFill shipped with the PROSE SDK [Microsoft 2015].

6.2.2 Results. Figure 8a shows the number of examples that FlashFill and GPT-3 require on the syntactic benchmarks.
When the number of examples shown equals the total number of examples, GPT-3 failed to solve the problem. The results indicate that GPT-3 is able to solve some problems, but requires significantly more examples. Its main weaknesses are tokenization and complex substring extraction logic. A problem as simple as extracting the first 4 letters of a word (“Alakazam” → “Alak”) is not solved after 7 examples. Similarly, extracting the penultimate word from a path specification (“path/to/file” → “to”) proves hard for the language model.

Fig. 8. Results on syntactic and mixed benchmarks. (a) Solving FlashFill benchmarks with GPT-3. Eight problems (26%) were not solved by GPT-3. Those problems that are solved by GPT-3 require significantly more examples than FlashFill. It is clear that a strong, syntactic synthesizer is required to automate repetitive transformation tasks. (b) Results on the mixed syntactic and semantic benchmark problems. Six problems (20%) were not solved by GPT-3. On half of those, FlashGPT3 requires barely two examples. On others, the bottleneck is GPT-3 requiring more examples to solve the semantic subproblems.

Figure 8b shows the number of examples that FlashGPT3 and GPT-3 require on the semantic benchmarks. FlashGPT3 consistently outperforms GPT-3 on problems that require more complex syntactic parsing. Aside from taking care of the syntactic part, deductive backpropagation has the advantage of generating smaller, more targeted problems for GPT-3. For instance, explicitly obtaining the infinitive of a verb ("were" → "be") is easier than requiring this transformation as part of a larger problem ("were" → "data/be.mp3"). Whereas generally a blessing, these smaller subproblems sometimes lack enough context for GPT-3 to learn the task. For instance, consider the problem of converting 24-hour to 12-hour notation ("22:00" → "10:00 PM"). FlashGPT3 breaks this down into two semantic subproblems, "22:00" → "10" and "22:00" → "PM", as the space is considered a constant.
The context of this task is important, as it takes FlashGPT3 one more example to solve the first of these subproblems than to solve the whole problem at once (6 versus 5). Note that the program using only a single map is also discovered during synthesis. Cross-validating programs during ranking makes it possible to trade off performing more queries for requiring fewer examples.

6.3 Deferred execution and clustering

After recursively solving the disjunctive specs from the witness functions of an operator, deductive backpropagation performs a soundness check by executing the operator on the learned arguments, before witness functions that depend on the result of this operator are invoked. We evaluate whether deferring the execution of queries until after ranking, together with clustering of queries, is required to learn programs quickly.

During evaluation, the syntactic guarantees of learning a program with FlashGPT3 allowed us to correct syntactic mistakes in the benchmark, such as trailing or missing spaces. Although GPT-3 shows decent performance on some syntactic problems, these kinds of syntactic guarantees are unavailable when relying on it alone.

```
Describe the relation between the following items.
Belgium | Brussels => capital
Pizza is delicious. | Pizza => subject
Lionel Messi | Football => sport
China | Asia => continent
Like a Prayer | Madonna => artist
x | y =>
```

Listing 4. Prompt to extract the name of a relation, where \( x \) and \( y \) are placeholders to be substituted with the input and output of an example from a query.

6.3.1 Experimental setup. We run all experiments with greedy clustering replaced by the Cartesian product over all possible queries. Learning is timed out after five minutes. During learning, we count how often a query would have been made by the synthesizer.

6.3.2 Results. Figure 9a shows the total time taken to learn the correct program. Bars that do not fit on the plot are instances where learning timed out.
Without clustering, learning timed out for 11 problems. With clustering, most programs are learned in less than a second. Only a few problems, involving long sentences and requiring more examples, take slightly longer, but are still learned in less than three seconds. Using cheap language models to improve the clustering step could further improve performance on these instances.

Figure 9b shows the number of times the semantic operators were invoked during learning, both with and without clustering. In other words, this plot shows the number of calls not made to the model by having these operators act as oracles during learning. They are divided over two plots for clarity, based on whether learning without clustering timed out or not. Without clustering, the number of calls is prohibitively high. Even with clustering, however, the number of calls quickly grows to tens of thousands for complex programs that require more examples. Such calls are both slow and expensive, and learning would still be slow for all but the smallest problems. Using deferred query execution, the number of calls drops to zero and programs are learned quickly.

6.4 Case study: renaming semantic operators

Running examples in this paper use descriptive names for semantic operators, but the actual operators are anonymous and represented by a query. In this case study, we explore using GPT-3 to rename semantic operators with descriptive names based on the examples in their queries.

**Example 6.1.** A transformation \( \text{SemMap}(x, [("UK", "£"), ("Japan", "¥")]) \) is not very readable. Using GPT-3, we can rename this to \( \text{getCurrency}(x) \).

Listing 4 shows a prompt in which each example describes the name of a relation between two concepts. If the two placeholders \( x \) and \( y \) are replaced with one of the input-output examples in a query, it hopefully returns a descriptive name for the operator of that query.
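The ranking of candidate names by frequency (cf. Example 6.2) can be sketched as follows; the sampled completions below are illustrative, not real model output:

```python
from collections import Counter

# Sketch of the renaming step: sample the top-k completions for the
# relation prompt (Listing 4) and keep the name that occurs most often.

def best_name(completions):
    return Counter(completions).most_common(1)[0][0]

samples = ["currency"] * 8 + ["country"]
print(best_name(samples))  # currency
```

Majority voting over sampled completions is a cheap way to smooth out the variance introduced by the non-zero temperature.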
Rather than only the best completion, we ask for the top-\( k \) completions and rank them by how often they occur. The temperature is set to 0.8 to obtain a greater variety of possible names.

**Example 6.2.** Setting \( x = \) "UK" and \( y = \) "£", the top-10 results are currency, currency, currency, currency, currency, currency, currency, currency and country.

Table 1 shows the names obtained using this method for some of the examples in our evaluation, for queries performed by FlashGPT3 and GPT-3. Names for the more specific FlashGPT3 queries are more accurate than those for GPT-3, which attempts to solve the whole problem at once.

Table 1. Using GPT-3 to generate names for semantic operators. We write \( \cdots \) for parts of sentences that are omitted for brevity. Because it generates concrete subproblems, names for FlashGPT3 are more accurate.

<table>
<thead>
<tr> <th>Input</th> <th>FlashGPT3</th> <th>GPT-3</th> </tr>
</thead>
<tbody>
<tr> <td>were</td> <td>be</td> <td>data/be.mp3</td> </tr>
<tr> <td>May 2, 1953</td> <td>mai month</td> <td>29 mai 1953 date</td> </tr>
<tr> <td>How many mice does your cat catch?</td> <td>mouse subject</td> <td>( \cdots ) (mouse) subject-verb</td> </tr>
<tr> <td>Guernica</td> <td>Picasso painter</td> <td>Picasso’s Guernica artwork</td> </tr>
<tr> <td>1984</td> <td>Orwell writer</td> <td>Orwell’s 1984 book</td> </tr>
<tr> <td>PRG</td> <td>Prague airport</td> <td>Departure from Prague (PRG) airport code</td> </tr>
</tbody>
</table>

6.5 Case study: String profiling

This section presents a case study where we use our semantic operators to perform semantic string profiling. Recall that for the profiling task in Figure 5b (Section 2.3), we wanted to learn the following semantic profile

\[ \text{"iPhone 11 "} \circ \text{Digit}+ \circ \text{"GB - "} \circ \text{matchColor} \circ \text{matchCarrier} \]

that represents a concatenation of atoms.
We extend FlashProfile with a \text{SemPos} atom that finds the next ending position of a semantic concept. **Example 6.3.** The \text{matchColor} atom can be represented by \text{SemPos}(x, Q, "R") with \[ Q = [\text{("Red AT&T", "Red")}, \text{("Space Gray AT&T", "Space Gray")}]. \] To add an atom to FlashProfile, we need a function that takes a set of strings \( S \) and returns a set of atoms compatible with the prefixes of those strings. This is achieved by creating a disjunctive spec that maps each string to all possible locations and then using the witness for \text{SemPos} with a semantic similarity measure, for example, cosine similarity between embeddings [Mikolov et al. 2013]. Finally, we select only the cluster with the highest intra-cluster similarity. **Example 6.4.** For the running example on profiling, we generate the following specs. \[ \text{"Red AT&T"} \leadsto 3 \cup 6 \cup 8 \\ \text{"Midnight Green Verizon"} \leadsto 8 \cup 14 \cup 22 \\ \text{"Space Gray Unlocked"} \leadsto 5 \cup 10 \cup 19 \] If we compute the similarity between strings as the cosine similarity between their average word embeddings, the following clusters are obtained with the \text{GreedyCluster} algorithm. \[ \{\text{"Red"}, \text{"Midnight Green"}, \text{"Space Gray"}\} \\ \{\text{"Red AT&T"}, \text{"Midnight Green"}, \text{"Space Gray"}\} \\ \{\text{"Red AT&T"}, \text{"Midnight Green"}, \text{"Space Gray"}\} \] The first cluster achieves the highest intra-cluster similarity and is selected to build the atom. After we find the pattern for the colors, we can perform a similar step for the carriers. These \text{SemPos} atoms are generic, however, and to be useful to users they are ideally given a descriptive name. Using the query from Section 6.4, our system is able to do exactly that. 
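The cluster-selection step can be sketched concretely: average the word embeddings of each string, score each candidate cluster by the average pairwise cosine similarity of its members, and keep the best-scoring cluster. The embeddings below are toy 3-dimensional vectors invented for illustration; a real system would use pre-trained vectors such as word2vec:

```python
import math

# Toy 3-dimensional "word embeddings", invented for illustration only.
EMB = {
    "red": (1.0, 0.0, 0.0), "midnight": (0.95, 0.1, 0.0),
    "green": (0.9, 0.0, 0.1), "space": (0.92, 0.05, 0.05),
    "gray": (0.97, 0.03, 0.0), "at&t": (0.0, 1.0, 0.0),
    "verizon": (0.0, 0.0, 1.0), "unlocked": (0.0, -1.0, 0.0),
}

def avg_embedding(s):
    """Average the embeddings of a string's words."""
    vecs = [EMB[w] for w in s.lower().split()]
    return tuple(sum(v[i] for v in vecs) / len(vecs) for i in range(3))

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def intra_similarity(cluster):
    """Average pairwise cosine similarity between the cluster's strings."""
    vs = [avg_embedding(s) for s in cluster]
    pairs = [(i, j) for i in range(len(vs)) for j in range(i + 1, len(vs))]
    return sum(cosine(vs[i], vs[j]) for i, j in pairs) / len(pairs)

# Two candidate clusters for the color atom: the pure colors score
# higher than the variants that also swallow the carrier suffix.
clusters = [
    ["Red", "Midnight Green", "Space Gray"],
    ["Red AT&T", "Midnight Green Verizon", "Space Gray Unlocked"],
]
best = max(clusters, key=intra_similarity)  # the pure-color cluster
```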
7 RELATED WORK **Inductive program synthesis.** Learning to write programs from demonstrations has been a popular research area for a long time [Cypher and Halbert 1993]. After the success of FlashFill, learning string transformation programs has become one of the most popular domains in this area [Gulwani 2011]. Later, partial examples were shown sufficient to learn extraction programs by FlashExtract [Le and Gulwani 2014]. The FlashMeta framework generalizes the deductive backpropagation algorithm behind FlashFill and FlashExtract to a unified framework that significantly reduces the effort required to develop industrial synthesizers [Polozov and Gulwani 2015]. Other successful applications of this technology are predictive synthesis, in which no output is given at all, for text splitting [Raza and Gulwani 2017] and modeless synthesis, in which the system watches a user and generates its own examples, for suggesting refactoring operations [Miltner et al. 2019]. **Neural program synthesis.** In earlier approaches to neuro-symbolic program synthesis, neural networks were used to guide [Balog et al. 2019; Ellis et al. 2021] or replace [Devlin et al. 2017; Parisotto et al. 2016] the search over a given DSL. There, the goal is to allow longer programs to be learned over possibly noisy inputs, but the scope of problems that can be solved remains limited to purely syntactic ones. A limited level of semantic capabilities was achieved by leveraging APIs to transform data and using a neural guided search for navigating the large branching factor caused by this integration [Bhupatiraju et al. 2017]. Our integration, on the other hand, extends the DSL with neural operators that are able to learn a required task from few examples, which allows for a fast, enumerative search using deductive backpropagation and is more flexible in the scope of semantic tasks that it performs. Figure 10 compares both ways of integrating neural networks with inductive synthesis. 
**Language modelling.** The ability to learn vector representations of words without supervision [Mikolov et al. 2013] not only drastically improved the downstream performance of a plethora of natural language processing (NLP) tasks, but also significantly lowered the bar for adding semantics to different applications. The challenging task of estimating the semantic similarity between words was reduced to computing a similarity between their vector representations, pre-trained versions of which are readily available for download. Ever since, language modelling has shifted towards training a general model on large amounts of unlabelled data and fine-tuning this model towards a specific task on smaller amounts of labelled data [Devlin et al. 2019]. One way of training such a general model, called generative or autoregressive pre-training, involves predicting the next token when given a short piece of text. With the ever-increasing size of these models, from 117M parameters in the original GPT model [Radford et al. 2018], to 1.5B parameters in GPT-2 [Radford et al. 2019], to 175B parameters in GPT-3 [Brown et al. 2020], the question has arisen of how much knowledge is stored in these parameters [Petroni et al. 2019]. **Prompt engineering.** It has been shown that the prompt format used to extract information from GPT-3 has a significant influence on the performance [Zhao et al. 2021]. This task of constructing good prompts is called prompt engineering. Recent research has focused on determining what constitutes good examples for question answering [Liu et al. 2021a] and how to rewrite prompts to be better for natural language understanding [Liu et al. 2021b]. **Semantics in program synthesis.** With the increasing availability of large code bases and corpora of web tables, it was only a matter of time until these would be integrated with inductive synthesis. InfoGather [Yakout et al. 2012] and the first DataXFormer [Abedjan et al. 
2016] extract and match information contained in web tables for data augmentation and transformation. Later versions of DataXFormer complement web table data with information from knowledge graphs and web forms. The data transformations are limited to table lookups; there is no PBE component, and both the input and the output must be explicitly present in the tables. Transform-data-by-example (TDE) uses functions from code bases and web forms to allow semantic operations in inductive synthesis [He et al. 2018]. As opposed to our framework, the synthesis algorithm has to be highly tailored towards using these external sources and is limited to string → string transformations. **Correctness in program synthesis.** Examples are an under-specified format of user intent in program synthesis [Gulwani et al. 2017] and PBE systems are typically not able to guarantee correctness. As opposed to neural networks, which also rarely provide guarantees on their output, synthesized programs can still be validated by users. In this regard, our approach is slightly better than raw neural networks, as the output program conforms to a DSL, but worse than traditional PBE systems, because the program may contain black-box neural operators. This may not matter in practice, however, as real-world synthesizers such as FlashFill [Gulwani 2011] and Blue-Pencil [Miltner et al. 2019] do not expose learned programs to the users, as they can be complex and written in a DSL that a user might not be familiar with. Instead, the output is presented to the user for validation. If the number of non-exemplar rows is too large to be validated manually, we can use the technique proposed in [Mayer et al. 2015], where users only need to focus on rows where the outputs of top-rank programs are different. 8 CONCLUDING REMARKS This paper introduces a novel integration of two popular technologies: inductive program synthesis and autoregressive language models with few-shot learning capabilities. 
We formalize three semantic operators, powered by the language model, that enable tasks involving language understanding and general knowledge, and describe procedures for learning them using deductive backpropagation. These operators can be easily integrated in DSLs for different tasks, such as string transformations and profiling. We show that a combination of syntactic string processing and semantic operators allows the automation of repetitive tasks that involve lookup and natural language understanding from a few examples. In our evaluation, we show that having these operators act as oracles during learning, together with pruning the set of candidate operators, is required to learn these programs quickly. Additionally, we show that the operator semantics and learning can be easily integrated in existing DSLs with a case study on string profiling. The ideas introduced in this paper suggest several interesting directions for future work. Cheaper language models may be used to improve witness functions and ranking. Specifically, models that allow semantic similarity computations may remove syntactic limitations that stem from clustering. PBE systems are generally sensitive to noise, as they have to learn an exact program from very few examples. In the presence of noise, when a syntactic program is not found, FlashGPT3 will default to semantic operators, which might be resilient to some levels of noise. Finally, we plan to extend this integration to different domains. Most notably, advances in the domains of semantic refactoring and data extraction may quickly lead to commercial adoption. ACKNOWLEDGEMENTS We would like to thank Luc De Raedt for his valuable feedback during the early stages of this research, as well as our anonymous reviewers for their insightful remarks that helped us improve the quality of this work. 
This work has received funding from the European Research Council (ERC) under the European Union’s Horizon 2020 research and innovation programme (grant agreement No [694980] SYNTH: Synthesising Inductive Data Models). This research received funding from the Flemish Government (AI Research Program). REFERENCES J. Devlin, Jonathan Uesato, Surya Bhupatiraju, Rishabh Singh, Abdelrahman Mohamed, and P. Kohli. 2017. RobustFill: Neural Program Learning under Noisy I/O. In ICML.
Detecting Deadlock in Programs with Data-Centric Synchronization

Daniel Marino (Symantec Research Labs, USA) danmarino@yahoo.com
Christian Hammer (Saarland University, Germany) c.hammer@acm.org
Julian Dolby, Mandana Vaziri (IBM T.J. Watson Research Center, USA) {dolby,mvaziri}@us.ibm.com
Frank Tip (University of Waterloo, Canada) ftip@uwaterloo.ca
Jan Vitek (Purdue University, USA) jv@cs.purdue.edu

Abstract—Previously, we developed a data-centric approach to concurrency control in which programmers specify synchronization constraints declaratively, by grouping shared locations into atomic sets. We implemented our ideas in a Java extension called AJ, using Java locks to implement synchronization. We proved that atomicity violations are prevented by construction, and demonstrated that realistic Java programs can be refactored into AJ without significant loss of performance. This paper presents an algorithm for detecting possible deadlock in AJ programs by ordering the locks associated with atomic sets. In our approach, a type-based static analysis is extended to handle recursive data structures by considering programmer-supplied, compiler-verified lock ordering annotations. In an evaluation of the algorithm, all 10 AJ programs under consideration were shown to be deadlock-free. One program needed 4 ordering annotations and 2 others required minor refactorings. For the remaining 7 programs, no programmer intervention of any kind was required. I. INTRODUCTION Writing concurrent programs that operate on shared memory is error-prone as it requires reasoning about the possible interleavings of threads that access shared locations. If programmers make mistakes, two kinds of software faults may occur. Data races and atomicity violations may arise when shared locations are not consistently protected by locks. Deadlock may occur as the result of undisciplined lock acquisition, preventing an application from making progress. 
Previously [1–3], we proposed a data-centric approach to synchronization to raise the level of abstraction in concurrent object-oriented programming and prevent concurrency-related errors. In our approach, fields of classes are grouped into atomic sets. Each atomic set has associated units of work, code fragments that preserve the consistency of their atomic sets. Our compiler inserts synchronization that is sufficient to guarantee that, for each atomic set, the associated units of work are serializable [4], thus preventing data races and atomicity violations by construction. Our previous work reported on the implementation of atomic sets as an extension of Java called AJ: we demonstrated that atomic sets enjoy low annotation overhead and that realistic Java programs can be refactored into AJ without significant loss of performance [3]. However, our previous work did not address the problem of deadlock, which may arise in AJ when two threads attempt to execute the units of work associated with different atomic sets in different orders. This paper presents a static analysis for detecting possible deadlock in AJ programs. The analysis is a variation on existing deadlock-prevention strategies [5, 6] that impose a global order on locks and check that all locks are acquired in accordance with that order. However, we benefit from the declarative nature of data-centric synchronization in AJ to infer the locks that threads may acquire: (i) the only locks acquired are those associated with atomic sets, and (ii) the memory locations associated with different atomic sets are disjoint unless they are explicitly merged by the programmer. Our algorithm computes a partial order on atomic sets which is consistent with lock acquisition order. If such an order can be found, a program is deadlock-free. For programs that use recursive data structures, the approach is soundly extended to take into account a programmer-specified ordering between different instances of an atomic set. 
We implemented this analysis and evaluated it on 10 AJ programs. These programs were converted from Java as part of our previous work [3], and cover a range of programming styles. The analysis was able to prove all 10 programs deadlock-free. Minor refactorings were needed in 2 cases, and a total of 4 ordering annotations were needed, all in 1 program. In summary, this paper makes the following contributions: - We present a static analysis for detecting possible deadlock in AJ programs. It leverages the declarative nature of atomic sets to check that locks are acquired in a consistent order. If so, the program is guaranteed to be deadlock-free. Otherwise, possible deadlock is reported. - To handle recursive data structures, we extend AJ with ordering annotations that are enforced by a small extension of AJ’s type system. We show how these annotations are integrated with our analysis in a straightforward manner. - We implement the analysis and evaluate it on a set of 10 AJ programs. The analysis establishes deadlock-freedom of each of these, requiring minor refactorings in 2 cases. Only 4 ordering annotations were needed, in 1 program. II. DATA-CENTRIC SYNCHRONIZATION WITH AJ AJ [2] extends Java with the syntax of Fig. 1. An AJ class can have zero or more atomicset declarations. Each atomic set has a symbolic name and intuitively corresponds to a logical lock protecting a set of memory locations. Each atomic set has associated units of work, code fragments that preserve the consistency of their associated atomic sets. These units of work are the only code permitted to access the atomic set’s fields, so only this code needs to be synchronized to ensure its consistency. By default, the units of work for an atomic set declared in a class C consist of all non-private methods in C and its subclasses. 
Given data-centric synchronization annotations, the AJ compiler inserts concurrency control operations that are sufficient to guarantee that any execution is atomic-set serializable [4], i.e., equivalent to one in which, for each atomic set, its units of work occur in some serial order. One may think of a unit of work as an atomic section [7] that is only atomic with respect to a particular set of memory locations. Accesses to locations not in the set are visible to other threads. Methods that do not operate on locations within atomic sets will not be synchronized. We illustrate the discussion with a binary tree example. Fig. 2 shows a class Tree with fields root and size; root points to the Node that is the root of the tree. Each node has left and right fields pointing to its children, as well as a value and a weight. Class Tree has methods size(), which returns the number of nodes in the tree, find(), for finding a node with a given value, and insert() for inserting a value into the tree. The latter two methods rely on methods Node.find() and Node.insert(). Tree also has methods compute(), which returns the weighted sum of its nodes’ values, and copyRoot(), which inserts the root’s value into another tree passed as an argument. We assume that the programmer wants to ensure that concurrent calls to incWeight() and compute() on the same tree never interleave, as this might trigger a race condition that causes Tree.compute() to return a stale value. We now discuss how this can be achieved in AJ. Tree declares an atomic set t (line 2). The annotations on lines 3-4 have the effect of including root and size in this atomic set. At run time, each Tree object has an atomic-set instance t containing the corresponding fields. The AJ compiler inserts locks to ensure that the units of work for t execute atomically. Preserving the consistency of complex data structures typically requires protecting multiple objects (e.g., all of a Tree’s nodes) with a single lock. 
This can be achieved using aliasing annotations, which unify the atomic sets of a Tree and the different Node objects into one larger atomic set. Aliasing annotations are type qualifiers, so the declaration Node left[n=this.n] on line 24 specifies that the atomic set instance n of the object referenced by left is unified with that of the current object. Likewise, atomic set instance n in the Node allocated on line 5 is unified with atomic set instance t in its enclosing Tree object. AJ’s type system enforces the consistency of such aliasing annotations to prevent synchronization errors. Together, the aliasing annotations on Tree and Node ensure that all locations in a Tree object are protected by the same lock. Fig. 3(a) shows a client where two threads insert concurrently into a tree. Such operations will execute correctly, as

```java
 1 class Tree {
 2   atomicset t;
 3   private atomic(t) Node root[n=this.t];
 4   private atomic(t) int size = 1;
 5   Tree(int v) { root = new Node[n=this.t](v); }
 6   int size() { return size; }
 7   INode find(int v) { return root.find(v); }
 8   void insert(int v) { root.insert(v); size++; }
 9   int compute() { return root.compute(); }
10   void copyRoot(Tree tree) { tree.insert(root.getValue()); }
11 }

13 interface INode {
14   public void incWeight(int n);
15   INode find(int v);
16   void insert(int v);
17   int getValue();
18   int compute();
19 }

22 class Node implements INode {
23   atomicset n;
24   private atomic(n) Node left[n=this.n];
25   private atomic(n) Node right[n=this.n];
26   private atomic(n) int value, weight = 1;
27   Node(int v) { value = v; }
28   int getValue() { return value; }
29   void insert(int v) {
30     if (value == v) weight++;
31     else if (v < value) {
32       if (left == null) left = new Node[n=this.n](v);
33       else left.insert(v);
34     } else {
35       if (right == null) right = new Node[n=this.n](v);
36       else right.insert(v);
37     }
38   }
39   public void incWeight(int n) { weight += n; }
40   INode find(int v) {
41     if (value == v) return this;
42     else if (v < value) return left == null ? null : left.find(v);
43     else return right == null ? null : right.find(v);
44   }
45   int compute() {
46     int result = value * weight;
47     result += (left == null) ? 0 : left.compute();
48     return result + ((right == null) ? 0 : right.compute());
49   }
50 }
```

Fig. 2. AJ Tree example.

AJ ensures mutual exclusion. Note that the client code does not refer to atomic sets at all, as is typical in our approach. III. DEADLOCK DETECTION IN AJ A. Execution of the Example Recall that for any object o created at runtime that is of a type that declares an atomic set t, there will be an atomic set instance o.t that protects the fields in o that are declared to be in t. Atomic set instances can be thought of as resources that are acquired when an associated unit of work is executed. As we shall see shortly, deadlock may arise if two threads concurrently attempt to acquire such resources out of order. Consider the program of Fig. 3(a), which creates a tree and two threads that work on it. Execution proceeds as follows:
1) When a Tree object is created and assigned to variable tree on line 52, its corresponding atomic set instance, tree.t, protects the root and size fields of the new object.
2) Tree’s constructor on line 5 creates a Node object. The alias declaration on line 3 causes its left, right, value and weight fields to be included in atomic set instance tree.t.
3) The object creations for T1 and T2 on lines 53–54 are standard, with no special operations for atomic sets.
4) Once the workers start (line 55), both threads attempt to invoke insert() on tree. Since insert() is a unit of work for t and both threads operate on the same Tree object, AJ’s runtime system enforces mutual exclusion, by taking a lock upon calling insert() (see Sec. V). Thus, the two operations execute serially.
5) The join() calls on line 55 wait for the workers to finish.
Now consider the code in Fig. 
3(b), which is similar except that two Tree objects are created and assigned to variables tree1 and tree2 (line 64). Then, two worker threads, T3 and T4, are created on lines 65–66. Note that each worker is passed references to both tree1 and tree2 in the constructor calls, but in a different order. Then, each worker calls copyRoot() on one tree, which in turn calls insert() on the other. These methods are both units of work for atomic set t, so T3 attempts to acquire the lock for tree1.t upon calling copyRoot() and then the lock for tree2.t when it calls insert(). T4 attempts precisely the reverse: it acquires the lock for tree2.t when calling copyRoot() and then the lock for tree1.t when calling insert(). This is a classic situation where deadlock may arise when threads acquire multiple locks in different orders. B. Preventing Deadlock Deadlock can be prevented by totally ordering all possible locks, and always acquiring locks in that order. Our algorithm attempts to find a partial order < on atomic sets, where a < b means that threads never attempt to acquire a lock on an a while holding a lock on a b. That is, any thread that needs both locks simultaneously must acquire a first. If no such order can be found, deadlock is deemed possible. The ordering < between atomic sets reflects transitive calling relationships between their units of work. For each path in the call graph from a method m that is a unit of work for atomic set a to a method n that is a unit of work for atomic set b, we create an ordering constraint a < b. However, if a = b and we can determine that both methods are units of work on the same atomic-set instance, then no ordering constraint needs to be generated, as locks are reentrant. Possible deadlock is reported if, after generating all such constraints, < is not a partial order. While this algorithm is conceptually simple, some complications arise in the presence of atomic set aliasing, when multiple names may refer to the same atomic set. 
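Checking that < is a partial order reduces to cycle detection over the constraint graph, where each constraint a < b is an edge a → b and a self-constraint such as t < t is a cycle of length one. The following is a minimal sketch of this check, not the authors' implementation:

```python
# Report possible deadlock iff the constraint graph has a cycle,
# i.e. the generated constraints do not form a partial order.
def deadlock_possible(constraints):
    graph = {}
    for a, b in constraints:
        graph.setdefault(a, set()).add(b)
        graph.setdefault(b, set())
    WHITE, GRAY, BLACK = 0, 1, 2  # unvisited / on DFS stack / done
    color = {n: WHITE for n in graph}

    def has_cycle(n):
        color[n] = GRAY
        for m in graph[n]:
            if color[m] == GRAY or (color[m] == WHITE and has_cycle(m)):
                return True
        color[n] = BLACK
        return False

    return any(color[n] == WHITE and has_cycle(n) for n in graph)

# Fig. 3(a): all transitive calls are lock re-entries, no constraints.
assert not deadlock_possible([])
# Fig. 3(b): copyRoot() on one tree calls insert() on the other,
# yielding the self-constraint t < t, so possible deadlock is reported.
assert deadlock_possible([("t", "t")])
```

Atomic-set aliasing only complicates how the constraints are generated; the order check itself is unchanged.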
This will be discussed further in Sec. IV. For Fig. 3(a), the algorithm infers that atomic sets t and n are unordered and declares the program deadlock-free, since due to aliasing annotations it can show that all transitive calls between units of work simply result in lock re-entry. For Fig. 3(b), a constraint t < t is inferred, indicating that deadlock may occur, as we have already seen.

C. Refactoring against Deadlocks

In our experience, many cases of deadlock can be avoided by simple refactorings that order lock acquisition. This can be accomplished using AJ's unitfor construct, which declares a method to be an additional unit of work for an atomic set in one of its parameters. For example, deadlock can be prevented in Fig. 3(b) by placing a unitfor annotation on the parameter tree of the copyRoot() method as follows:

```java
void copyRoot(unitfor(t) Tree tree){
  tree.insert(root.getValue());
}
```

This declares copyRoot() to be a unit of work for atomic set instance tree.t, as well as this.t. When a method is a unit of work for multiple atomic set instances, AJ's semantics guarantees that the corresponding resources are acquired atomically, thus preventing deadlock in Fig. 3(b). Sometimes, deeper code restructuring is needed before the unitfor construct can be used; Sec. VI gives some examples.

Fig. 3. Two clients of the Tree class of Fig. 2.

D. Recursive Data Structures

The basic algorithm sketched above can fail to prove the absence of deadlock in programs that use recursive data structures. Fig. 4 illustrates this with a variant of our binary tree that allows concurrent updates to the weight of different nodes in the same tree. However, insert() should still ensure mutual exclusion to avoid corruption of the tree's structure.
This synchronization policy is implemented by keeping the atomic sets of the tree and of its nodes distinct: the atomic set instances of different Node objects must not be aliased with each other, as this would preclude concurrent access to different nodes. In Fig. 4, once a thread has a reference to an INode, it can invoke incWeight() on it. As Node.incWeight() is a unit of work for the node's atomic set n, no other thread can concurrently access that node. However, since different nodes no longer share the same atomic set instance, incWeight() can be called concurrently on different nodes, as desired. Note that invoking Tree.insert() involves acquiring the lock associated with the tree's atomic set instance t, thus ensuring the desired mutual exclusion behavior.

E. Analyzing the Modified Tree Example

Now consider using the tree of Fig. 4 with the client program of Fig. 5. The basic algorithm discussed above would compute an ordering constraint n < n for this program, because Node.insert() recursively invokes itself on the children of the current node. Given the absence of aliasing annotations, these nodes now have distinct atomic set instances, and the basic algorithm concludes that deadlock is possible since it cannot rule out that two threads may access the atomic set instances of different Node objects in different orders. However, it is easy to see that this particular program is deadlock-free, as the recursive calls to insert() traverse the tree in top-down order. Hence, the locks associated with the instances of atomic set n in the traversed nodes are always acquired in a consistent order, precluding deadlock.

F. Ordering Annotations

To handle recursive data structures, we extend AJ with ordering annotations as shown in Fig. 6. This lets programmers specify an ordering between instances of the same atomic set.
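The consistent top-down acquisition order can be illustrated with plain Java locks (a minimal sketch with a hypothetical OrderedNode class, not AJ syntax): every thread locks a node before its children, so node locks are always requested in ancestor-to-descendant order and no cycle in the waits-for relation can form.

```java
import java.util.concurrent.locks.ReentrantLock;

// Sketch: top-down locking in a binary search tree. Each insert() locks
// the current node before descending to a child, mirroring the
// per-instance ordering that AJ's ordering annotations describe.
class OrderedNode {
    final ReentrantLock lock = new ReentrantLock();
    final int value;
    OrderedNode left, right;

    OrderedNode(int value) { this.value = value; }

    void insert(int v) {
        lock.lock();            // the parent is always locked first ...
        try {
            if (v < value) {
                if (left == null) left = new OrderedNode(v);
                else left.insert(v);   // ... then a child: consistent order
            } else {
                if (right == null) right = new OrderedNode(v);
                else right.insert(v);
            }
        } finally {
            lock.unlock();      // released on the way back up
        }
    }
}
```

Because all threads descend from the root, any two node locks held simultaneously are acquired in the same ancestor-first order, which is exactly the fine-grained partial order that the annotations of Fig. 7 make explicit to the analysis.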
The deadlock analysis can then avoid generating constraints of the form a < a when the user-provided ordering indicates that a call cannot contribute to deadlock. Fig. 7 shows how to express an ordering between an atomic set n in a given node, and in each of its children. Given these annotations, our enhanced algorithm (see Sec. V) confirms that the program of Fig. 5 is indeed deadlock-free. Note that programmer-provided ordering annotations are not blindly trusted. The type-checker ensures that the specified order is acyclic, while the analysis verifies that it is consistent with lock acquisition order.

IV. ALGORITHM

A. Auxiliary Definitions

Fig. 8 defines auxiliary concepts upon which our algorithm relies. We assume that a call graph of the program has been constructed and that \( \rightarrow \) denotes the calling relationship between methods\(^1\). Function \(\text{uow}\) associates each method with the atomic-set instances for which it is a unit of work, including those due to unitfor constructs. Intuitively, \(\text{uow}(m)\) identifies the set of locks that \(m\) acquires (or re-enters) in the current AJ implementation. A lock is an element of \(L\), and is represented as a \textit{set of names}, since locks may have many names due to aliasing annotations. Names (elements of \(N\)) are written as \(*v.A\), where \(*\) is either \(=\) or \(<\), \(v\) is a final method parameter or variable, and \(A\) is the name of an atomic set. If neither \(=\) nor \(<\) is specified, then \(=\) is assumed. Names of the form \(<v.A\) are not considered until Sec. IV-C.

Fig. 8 also defines \(LBE(m)\) (locks before entry), denoting the sets of locks that may be held just before entering method \(m\). In general, different sets of locks may be held when \(m\) is invoked by different callers. It is important to keep these sets of locks distinct, to avoid imprecision in the analysis that could give rise to false positives.
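The caller-to-callee renaming that keeps these locksets meaningful across call boundaries (functions padaptName and padaptLock in Fig. 8) can be sketched as follows; this is a hypothetical simplification that treats a name v.A as a string and ignores aliasing annotations:

```java
import java.util.*;

// Hypothetical sketch of name adaptation from Fig. 8: a lock is a set of
// names "v.A" (variable v, atomic set A). When a lockset crosses a call
// boundary, each name is rewritten from the caller's variables to the
// callee's parameters; a lock with no callee-scope name degrades to "?".
class Adapt {
    // argToParam: the caller's argument variable -> the callee's
    // parameter name, for one particular call site.
    static String adaptName(String name, Map<String, String> argToParam) {
        int dot = name.lastIndexOf('.');
        String var = name.substring(0, dot);
        String set = name.substring(dot + 1);
        String param = argToParam.get(var);
        return (param == null ? "?" : param) + "." + set;
    }

    static Set<String> adaptLock(Set<String> lock,
                                 Map<String, String> argToParam) {
        Set<String> adapted = new HashSet<>();
        for (String n : lock) adapted.add(adaptName(n, argToParam));
        return adapted;
    }
}
```

For the call from Tree.insert() to Node.insert() analyzed in the example of Sec. IV-D, root is passed for this, and the caller's this has no callee-scope name, so the lock {this.t, root.n} is adapted to {?.t, this.n}.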
Our algorithm effectively performs a context-sensitive analysis by computing a separate set of locks (lockset) for each path in the call graph\(^2\), where locksets are propagated from callers to callees and augmented with locally acquired locks. When locks are passed from caller to callee, names are adapted to the callee, to account for the fact that different name(s) now represent the same lock (see functions \(\text{padaptName}\) and \(\text{padaptLock}\) in Fig. 8). Note that \(\text{padaptName}\) and \(\text{padaptLock}\) use a special symbol \(?\) to handle cases where a lock cannot be named by a variable in the scope of the callee, and that \(\text{padaptLock}\) relies on function \(\text{addNames}\) to gather additional names that must refer to the same lock due to aliasing annotations.\(^3\)

The definition of \(LBE(m)\) consists of two rules:

- Rule LBE-ENTRY adds the empty lockset to \(LBE(m)\) if \(m\) is an entry point, indicating that no locks are held before the program begins.
- Rule LBE-CALL takes each lockset that may be held before entering a caller, augments it with the locks that the caller acquires, and then adapts the lockset to the perspective of the callee using \(\text{padaptLock}\).

These rules are iterated to a fixed point in order to determine all of the locksets that may be held before entering a method.

B. Core Algorithm

Fig. 9 defines an ordering \(<\) on atomic sets using \(LBE(m)\). Intuitively, for atomic sets \(A\) and \(B\) we have \(A < B\) if a lock associated with an instance of atomic set \(A\) may be acquired before a lock that is associated with an instance of atomic set \(B\).
Rule UOW states that this is the case if there is a method \(m\) and some lockset \(d \in LBE(m)\) that contains a lock named \(v.A\), and we have some \(w.B\) that names a lock in \(\text{uow}(m)\) that is not already held in \(d\).\(^4\)

When atomic sets are aliased, we must account for the fact that multiple names may refer to the same lock. In general, generating an ordering constraint \(A < B\) can be avoided when encountering a unit of work for atomic-set instance \(w.B\) if a lock corresponding to atomic-set instance \(w.B\) is already held, and if it can be determined that \(v.A\) and \(w.B\) must refer to the same lock (in that case, the lock is simply re-entered). Two key steps enable us to do this: (i) by keeping locksets separate for each path in the call graph, we can determine when locks must be held, and (ii) the representation of a lock maintains all its known names (i.e., must-aliases), allowing us to identify situations where locks are re-entered.

To be sound, when the analysis generates ordering constraints due to lock acquisition, it must do so for all atomic sets that may be used to name the locks involved. Because alias annotations can be cast away, we cannot rely on local annotations to provide the analysis with all possible may-aliases for a given lock. Therefore, rules SHARE-1 and SHARE-2 conservatively generate additional orderings to account for any annotated constructors in the whole program that could cause instances of two atomic sets to be implemented using the same lock. Rather than naively merging atomic sets that have instances that may be aliased, our analysis uses a transitive "gives" relation and a symmetric "shares" relation. This avoids generating spurious ordering constraints and deadlock reports. The code in Fig. 10 demonstrates why this is needed. Two classes C and D use a utility class List, and each uses an alias annotation that causes the List's atomic set to be implemented using the lock for its own atomic set. The result is that, although a List may share a lock with either a C or a D, C objects never share locks with D objects. By maintaining this level of precision, we avoid generating a spurious deadlock report at line 127.

Lastly, rule TRANS defines \(<\) to be transitive. Now, deadlock may occur if \(<\) is not a valid partial order. Conversely, if there is no atomic set \(A\) such that \(A < A\), then the program is deadlock-free: we have found a valid partial order on atomic sets that is consistent with the order in which new locks are acquired by transitive calls between units of work.

\(^1\) To simplify the presentation, we assume that a method \(m\) calls another method \(n\) at most once, and that the same variable is not passed for multiple parameters. Our implementation, of course, does not have these restrictions.

\(^2\) Note that \(LBE(m)\) could conservatively contain a lockset that is never held before entering method \(m\) if the call graph contains infeasible paths. However, because AJ inserts the necessary lock acquisitions and \(\text{uow}\) reflects this knowledge, the locksets themselves are precise and represent exactly the locks that are held if a particular path in the call graph is traversed.

\(^3\) This is not necessary for soundness, but allows the algorithm to more precisely identify lock re-entry.

\(^4\) Note that rule UOW subtly relies on the fact that \(\text{uow}\) never returns a lock named using \(?\), since atomic-set instances for which a method is a unit of work are always nameable from that method's scope. Hence, there is no danger of failing to generate an ordering constraint because we are re-entering \(?.B\).

C.
Accounting for Ordering Annotations The basic algorithm is unable to infer a partial order among atomic sets in some programs that manipulate recursive data structures. For the program of Fig. 4, the rules of Fig. 9 infer \(n < n\), leading to the conclusion that deadlock might occur. However, as discussed in Sec. III-E, deadlock is impossible in this case because locks are always acquired in a consistent order that reflects how trees are always traversed in the same direction. Intuitively, tracking ordering constraints at the atomic-set level is insufficient in cases where threads recursively execute units of work associated with multiple instances of the same atomic set. Our solution involves having programmers specify ordering annotations that indicate a finer-grained partial order between different instances of the same atomic set, as was illustrated in Fig. 7. We extended the \(\text{AJ}\) type system to allow an atomic set instance to be ordered relative to exactly one other atomic set instance when it is constructed. The type system ensures that the object to which the newly constructed object is being related is already completely constructed, preventing objects that are being constructed simultaneously from specifying conflicting orders relative to one another. Since the programmer is restricted to giving a single constraint at object creation time, with respect to a completely constructed object, a cycle in the specified order is impossible. The type system then ensures that this order is respected by any dataflow that carries the ordering annotation. Finally, the analysis verifies that the programmer-specified, acyclic ordering is consistent with lock acquisition order, signaling potential deadlock if units of work for different instances of an atomic set may be entered out of the specified order. 
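Setting aside aliasing and the ordering annotations just described, the skeleton of the analysis can be sketched in plain Java (hypothetical data structures and method names; the actual implementation is described in Sec. V): locksets are propagated from callers to callees, an edge A < B is recorded whenever a new lock is acquired while another is held, and possible deadlock is reported if the resulting relation has a cycle.

```java
import java.util.*;

// Sketch of the core analysis, without aliasing or ordering annotations:
// uow(m) is the set of atomic-set instances whose locks method m
// acquires, and locksets are propagated along call-graph edges. An edge
// "A<B" records that a lock on an A may be held when a lock on a B is
// acquired; a cycle in these edges signals possible deadlock.
class DeadlockSketch {
    Map<String, Set<String>> calls = new HashMap<>(); // call-graph edges
    Map<String, Set<String>> uow = new HashMap<>();   // locks per method
    Set<String> order = new HashSet<>();              // edges "A<B"
    private Set<String> visited = new HashSet<>();    // (method, lockset)

    void analyze(String entry) { propagate(entry, new HashSet<>()); }

    private void propagate(String m, Set<String> held) {
        if (!visited.add(m + ":" + new TreeSet<>(held))) return; // fixpoint
        Set<String> acquired = uow.getOrDefault(m, Set.of());
        for (String a : held)
            for (String b : acquired)
                if (!held.contains(b)) order.add(a + "<" + b);  // rule UOW
        Set<String> now = new HashSet<>(held);
        now.addAll(acquired);
        for (String callee : calls.getOrDefault(m, Set.of()))
            propagate(callee, now);                    // rule LBE-CALL
    }

    // Deadlock is possible iff the recorded edges contain a cycle, i.e.,
    // "<" cannot be extended to a partial order.
    boolean deadlockPossible() {
        Map<String, Set<String>> g = new HashMap<>();
        for (String e : order) {
            String[] p = e.split("<");
            g.computeIfAbsent(p[0], k -> new HashSet<>()).add(p[1]);
        }
        for (String a : g.keySet())
            if (reaches(g, a, a, new HashSet<>())) return true;
        return false;
    }

    private boolean reaches(Map<String, Set<String>> g, String from,
                            String target, Set<String> seen) {
        for (String next : g.getOrDefault(from, Set.of())) {
            if (next.equals(target)) return true;
            if (seen.add(next) && reaches(g, next, target, seen)) return true;
        }
        return false;
    }
}
```

On a model of Fig. 3(b), where copyRoot() on one tree calls insert() on the other, the sketch records both edges between the two tree instances and reports possible deadlock; a model of lock re-entry, as in Fig. 3(a), yields no edges at all.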
Fig. 9's rule for generating ordering constraints is:

\[
\frac{d \in LBE(m) \quad l_1 \in d \quad l_2 \in \text{uow}(m) \quad v.A \in l_1 \quad w.B \in l_2 \quad \forall l_3 \in d:\ w.B \notin l_3}{A < B} \quad (\text{UOW})
\]

Fig. 11 revises \(\text{addNames}\) and rule UOW as follows:

\[
\text{addNames}(m, l) = l \cup \{ *w.B \mid *v.A \in l \text{ and } w.B \text{ is annotated to be an alias for } v.A \text{ in } m\text{'s scope} \} \cup \{ {<}x.A \mid *v.A \in l \text{ and } x.A \text{ is annotated to be greater than } v.A \text{ in } m\text{'s scope} \}
\]

\[
\frac{d \in LBE(m) \quad l_1 \in d \quad l_2 \in \text{uow}(m) \quad v.A \in l_1 \quad w.B \in l_2 \quad \forall l_3 \in d:\ w.B \notin l_3 \quad {<}w.B \notin l_1}{A < B} \quad (\text{UOW})
\]

Fig. 11 updates our analysis to soundly accommodate untrusted, user-specified orderings between atomic set instances. Function addNames now consults the ordering annotations available within a method and its enclosing class. Any atomic-set instance specified to be greater than a given instance is added to the lock's representation and prefixed with a \(<\) to indicate that it is not a must-alias, but rather a lock that is safe to enter after the represented lock. Rule UOW now avoids generating an ordering constraint due to one lock being held when another is acquired if the former is "less" than the latter.

If the analysis indicates deadlock-freedom, then it has found a valid partial order on all atomic set instances in the program that is consistent with the order in which threads acquire them. The ordering is made up of a coarse-grained ordering on atomic sets that indicates ordering between all instances of two atomic sets, and a fine-grained ordering among instances of a single atomic set as indicated by the user's annotations. An informal correctness argument can be found in [8].

D. Example

Let us consider the behavior of our analysis on the example program in Fig. 2 and its client in Fig. 3(a). The relevant facts discovered by our analysis are shown in Fig.
12(a) along with an indication of the rules and facts used to derive them. Note that the facts shown in the figure incorporate an optimization where names of the form \(?.A\) are dropped from a lock's set representation if it also contains a must-alias not involving \(?\). See Sec. V for why this is safe.

From LBE-ENTRY, we know that LBE(T.run) contains the empty lockset. Using this fact in the premise of LBE-CALL, we derive \( \emptyset \in LBE(\text{Tree.insert}) \). For the call from Tree.insert() to Node.insert(), LBE-CALL makes the following calculations:

- \( \emptyset \in LBE(\text{Tree.insert}) \), \(\text{uow}(\text{Tree.insert}) = \{ \{ \text{this.t} \} \}\)
- \( \{ \text{this.t} \} \in \emptyset \cup \{ \{ \text{this.t} \} \} \)
- \(\text{addNames}(\text{Tree.insert}, \{ \text{this.t} \}) = \{ \text{this.t, root.n} \}\)
- \(\text{padaptName}(\text{Tree.insert}, \text{this}, \text{Node.insert}) = ?\)
- \(\text{padaptName}(\text{Tree.insert}, \text{root}, \text{Node.insert}) = \text{this}\)
- \(\text{padaptLock}(\text{Tree.insert}, \{ \text{this.t} \}, \text{Node.insert}) = \{ \text{?.t, this.n} \}\)

After removing the unnecessary name involving \(?\), we get \( \{ \{ \text{this.n} \} \} \in LBE(\text{Node.insert}) \). Note that \(?.t\) can be dropped because the must-alias this.n is a more exact name for the lock in this context. The recursive calls to Node.insert() result in the same lockset, so no additional facts are derived using LBE-CALL. Furthermore, no ordering facts can be derived: the only method with a non-empty lockset upon entry is Node.insert(), and that lockset already contains the lock for which the method is a unit of work, preventing rule UOW from generating an ordering constraint. Since the empty ordering relation is a valid partial order, the program is declared deadlock-free. The remainder of Fig. 12 shows the relevant facts derived for the other examples from Figs. 3(b) and 5.

V. IMPLEMENTATION

We implemented the deadlock analysis as an extension of our existing proof-of-concept AJ-to-Java compiler [3], which is an Eclipse plugin project.
In this implementation, data-centric synchronization annotations are given as special Java comments. These comments are parsed and given to the type checker and deadlock analysis. Type errors, such as the use of inconsistent ordering annotations, are reported using markers in the Eclipse editor. If type-checking and the deadlock analysis succeed, the AJ source is translated to Java, and written into a new project that holds the transformed code. This project can then be compiled to bytecode, and executed using a standard JVM. More details on the implementation can be found in [3].

The deadlock analysis relies on the WALA program analysis framework\(^5\) for the construction of a call graph. The analysis first determines all entry points to the program (e.g., main() methods and the run() methods of threads), and then builds a conservative approximation of the program's call graph\(^6\). The propagation of atomic sets in our analysis is essentially a distributive data flow problem, so we are able to leverage WALA's efficient Interprocedural Finite Distributive Subset solver [9]. Our actual implementation works slightly harder than the formal rules of Sec. IV in gathering and propagating information gleaned from aliasing and ordering annotations, allowing, e.g., final fields of method parameters to be included in lock names. As mentioned, lock identifiers involving \(?\) are dropped from a lock's representation when that lock also has a must-alias name not involving \(?\).

\(^5\) See wala.sourceforge.net.

\(^6\) Reflection must be approximated, as with most static program analyses.
VI. EXPERIMENTAL VALIDATION

We analyzed a collection of AJ programs with our implementation in order to answer the following research questions:

RQ1 How successful is the analysis in demonstrating the absence of deadlock in AJ programs?
RQ2 How often are program transformations and ordering annotations necessary to prove the absence of deadlock?
RQ3 What is the running time of the analysis?

A. Subject Programs

The subject AJ programs used in this evaluation are shown in Table I. These programs were created in the context of a previous project that focused on evaluating the annotation overhead and performance of AJ [3], by manually converting a number of existing multi-threaded Java programs into AJ. Details about this conversion effort are discussed in [3]. The programs were obtained from several different sources and reflect a variety of programming styles. Elevator and tsp have been used by several other researchers (e.g., [10]) in projects related to data race detection. Weblech is a web crawler that recursively downloads all pages from a web site. Jcurzez allows building text-based user interfaces for simple terminals. The original jcurzez code did not support multi-threading, and we created two versions with well-defined behavior in the presence of concurrency: jcurzez1 achieves this behavior in a coarse-grained fashion, while jcurzez2 does so using more fine-grained synchronization. Cewolf is a framework for creating graphical charts. Jphonelite is a Java SIP voice-over-IP softphone for computers. Tuplesoup is a small Java-based framework for storing and retrieving simple hashes. Mailpuccino is a Java email client. Finally, specjbb is a widely used multi-threaded performance benchmark. All subject programs except tsp, weblech, and jcurzez rely on AJ versions of Java collections (e.g., TreeMap, ArrayList), which therefore must be analyzed as well in those cases.
Table I shows some key characteristics of the subject programs, including the number of lines of source code, the number of files, and the number of data-centric synchronization constructs. The row labeled "collections" is not a stand-alone subject program but rather displays the characteristics of the collection classes from the java.util package that we converted to AJ. For the actual subject programs, the collections column reports only "yes" or "no", indicating whether they use these classes and thus whether the collection code was examined by the analysis. As is apparent from the data, the number of atomic sets in the subject programs is small, ranging from 1 to 18. specjbb includes the largest number of fields in atomic sets (34 fields, and 15 entire classes). This is the case because a complex web of data structures is accessed and updated by multiple threads in this benchmark. unitfor annotations and aliasing are limited in application code but plentiful in the library classes.

B. Deadlock Analysis

In the absence of ordering annotations, our analysis guarantees the absence of deadlock in all but one of the subject programs (jcurzez2). Demonstrating the absence of deadlock in that program required 4 ordering annotations. Table II also shows the number of locksets that the algorithm generates during its analysis (i.e., the size of set \(\mathcal{D}\) in Fig. 8) as well as the running time of the analysis on each subject program. Experiments were run on a MacBook Air with a 1.8 GHz Intel Core i5 processor and 4GB of RAM. Even in its current unoptimized state, the analysis takes at most 75 seconds.

\begin{table}
\centering
\caption{AJ subject programs.
The table shows, for each subject program, the number of lines of source code (including white space and comments), files, and data-centric annotations (one subcolumn for each type of annotation).}
\begin{tabular}{|l|r|r|r|r|r|r|r|r|r|r|}
\hline
benchmark & \multicolumn{2}{c|}{LOC} & files & \multicolumn{7}{c|}{data-centric annotations} \\
 & program & collections & & atomic-set & atomic (class) & atomic (field) & unitfor & alias & notunitfor & total \\
\hline
collections & 0 & 10846 & 63 & 5 & 0 & 53 & 40 & 330 & 0 & 428 \\
elevator & 609 & yes & 6 & 1 & 1 & 0 & 0 & 6 & 0 & 8 \\
tsp & 734 & no & 6 & 2 & 2 & 0 & 0 & 0 & 0 & 4 \\
weblech & 1971 & no & 14 & 2 & 0 & 4 & 0 & 0 & 0 & 6 \\
jcurzez1 & 6659 & no & 49 & 5 & 2 & 7 & 15 & 24 & 0 & 53 \\
jcurzez2 & 6633 & no & 49 & 4 & 3 & 2 & 6 & 4 & 0 & 19 \\
tuplesoup & 7217 & yes & 40 & 7 & 5 & 11 & 12 & 0 & 46 & 81 \\
cewolf & 14002 & yes & 129 & 6 & 6 & 0 & 0 & 2 & 0 & 14 \\
mailpuccino & 14519 & yes & 135 & 14 & 13 & 1 & 0 & 0 & 0 & 28 \\
jphonelite & 16484 & yes & 105 & 14 & 10 & 26 & 0 & 8 & 0 & 58 \\
specjbb & 17730 & yes & 64 & 18 & 15 & 34 & 1 & 24 & 4 & 80 \\
\hline
\end{tabular}
\end{table}

TABLE II
Analysis results. The table shows, for each subject program, the number of ordering annotations required to guarantee the absence of deadlock, and the running time of our analysis.
\begin{tabular}{|l|r|r|r|}
\hline
Subject Program & Ordering annotations & locksets & Time [s] \\
\hline
elevator & 0 & 39 & 1.0 \\
tsp & 0 & 33 & 1.4 \\
weblech & 0 & 39 & 4.6 \\
jcurzez1 & 0 & 409 & 10.3 \\
jcurzez2 & 4 & 541 & 9.4 \\
tuplesoup & 0 & 785 & 8.8 \\
cewolf & 0 & 25 & 19.7 \\
mailpuccino & 0 & 205 & 48.2 \\
jphonelite & 0 & 34 & 7.2 \\
specjbb & 0 & 414 & 75.1 \\
\hline
\end{tabular}

Fig. 13. Excerpt from jcurzez2 requiring ordering annotations.

For the majority of our subject programs (7 out of 10), deadlock-freedom could be demonstrated without any programmer intervention. Both specjbb and tuplesoup required some slight refactoring in order to eliminate spurious deadlock reports. In both cases, component objects of a parent object kept a reference to their parent object in a field, and the analysis was unable to infer the equality of the parent that called a method on a child object and the object stored in the child's parent field. We refactored the problematic calls to pass an instance of the parent as a parameter to the child's method.

C. Threats to Validity

A critical reader might argue that the subject programs are small, and that they do not adequately represent concurrent programming styles that occur in practice. Obtaining suitable subject programs is a challenge for us, because AJ is a research language without real users. The AJ programs used in this evaluation were converted from Java as part of our previous work on evaluating the annotation overhead and performance of AJ [3].
Their construction predates this work on deadlock analysis, and we used all AJ programs that were available. The analyzed code includes AJ versions of collections such as TreeMap and ArrayList and all of their associated auxiliary data structures (e.g., map entries and iterators), which are quite complex. Furthermore, our subject programs include specjbb, a widely-used performance benchmark, and several programs that other researchers used in research on concurrency errors. Therefore, based on the current results, we are optimistic that the proposed deadlock analysis will scale to bigger programs.

In summary, the research questions posed at the beginning of this section can be answered as follows:

RQ1: The analysis was able to prove the absence of deadlock in all 10 of the subject programs that we considered.
RQ2: Two programs required minor refactorings before the absence of deadlock could be demonstrated. One program relied on recursive data structures that necessitated the introduction of 4 ordering annotations. For the remaining 7 programs, no programmer intervention was needed.
RQ3: The running time of the analysis is at most 75 seconds in all cases.

VII. RELATED WORK

Deadlock detection, prevention, and avoidance is well-trodden ground. In this section, we focus on static techniques.

**Static analysis.** At heart, all static analysis techniques attempt to detect cyclic waits-on relationships between tasks. To this end, they construct abstractions of the program's control flow, tasking and synchronization behavior. Cycles in these graphs correspond to possible deadlock. The precision of the analysis depends on ruling out cycles that cannot happen in practice. Masticola's work [5] is one example, and includes an extensive discussion of prior work.

To prove the absence of deadlocks caused by resource acquisition, a common approach is to statically look for an order on resources such that no task ever holds a resource while requesting a lesser one. Saxena [11] explored this approach in the context of concurrent Pascal code where all shared resources can be enumerated. Engler and Ashcraft [6] apply this approach to the analysis of large C programs, but abstract any non-global lock resource by the name of the type in which it is stored. Williams et al. [12] propose a lock-ordering based deadlock analysis for Java, focusing on analyzing libraries in the absence of client code. Our analysis follows this traditional approach of finding an order for resources, leveraging the declarative nature of AJ by using atomic set instances as a sound and effective abstraction for locks.

**Generating deadlock-free code.** Golan-Gueta et al. [13] demonstrate a technique for generating fine-grained, deadlock-free locking code for tree- and forest-based data structures. They introduce a strategy called domination locking to achieve this. AJ cannot support domination locking, but it provides a declarative way to write deadlock- and race-free code for general-purpose programs. Emmi et al. [14] use integer linear programming (ILP) to infer a locking strategy for programs written with atomic blocks in versions of C and Java. They impose ordering constraints on lock acquisition in order to avoid generating programs that can deadlock. AJ provides more programmer control over the level of concurrency and the desired behavior than this approach.

**Type systems.** Type-based approaches that address deadlock typically rely on an underlying type and effect system that exposes the locking behavior in type signatures and provides some mechanism to control aliasing. Boudol's work is a good example [15]: It defines a deadlock-free semantics for an imperative language and a type and effect system for deadlock avoidance.
In his work, singleton reference types allow reasoning about precise aliasing relationships between pointers and their locks. Gerakios et al. [16] extend this approach to unstructured locking and report low runtime overhead. Boyapati et al. [17] describe another such system where the notion of ownership [18] is used to restrict aliasing. In their work, a Java-like language is extended with ownership annotations and lock levels. Each lock has an associated lock level, and methods are annotated with the keyword locks to indicate they acquire locks at a given level. The type system ensures that locks are acquired in descending order. Gordon et al. [19] focus on fine-grained locking scenarios that involve concurrent data structures such as circular lists and mutable trees, where it is difficult to impose a strict total order on the locks held simultaneously by a thread. The approach relies on a notion of lock capabilities: associated with each lock is a set of capabilities to acquire further locks, and deadlock-freedom is demonstrated by proving acyclicity of the capability-granting relation.

Inference algorithms have been proposed to reduce the annotation burden. Agarwal et al. [20] present a type inference algorithm that infers locks clauses for Boyapati's type system. In programs that cannot be typed, a generalization of GoodLock [21] is used for runtime detection. Vasconcelos et al. [22] define a type inference system for a typed assembly language that defines a partial order in which locks have to be acquired. Their system supports non-structured locks in a cooperative multi-threading environment where threads may be suspended while holding locks.

Our approach relies on a static analysis that leverages the declarative nature of synchronization in AJ to prove deadlock-freedom. Programmer-supplied ordering annotations are required only in relatively rare cases when a recursive data structure with fine-grained synchronization is manipulated concurrently.
Our results suggest that this hybrid approach successfully avoids common pitfalls, such as the false positives reported by some static analyses, and the heavy notational burden of some type-based approaches.

**VIII. CONCLUSIONS**

We presented an analysis for detecting possible deadlock in AJ programs. The analysis is a variation on existing deadlock-prevention strategies [5, 6] that impose a global order on locks and check that locks are always acquired in accordance with that order. The declarative nature of synchronization in AJ enables us to compute an analogous ordering on atomic sets that reflects the invocations from units of work on one atomic set to units of work on another. For recursive data structures, this coarse-grained ordering sometimes does not suffice. Therefore, we added ordering annotations to AJ that enable programmers to specify an order between different instances of an atomic set, and we extended our analysis to soundly take these untrusted ordering annotations into account. We extended our AJ implementation to type-check ordering annotations, and incorporated the deadlock analysis in the type checker.

In an evaluation of the algorithm, all 10 AJ programs under consideration were shown to be deadlock-free. One program needed 4 ordering annotations and 2 others required minor refactorings. For the remaining 7 programs, no programmer intervention of any kind was required.

**REFERENCES**
Jekyll* on iOS: When Benign Apps Become Evil

Tielei Wang, Kangjie Lu, Long Lu, Simon Chung, and Wenke Lee
School of Computer Science, College of Computing, Georgia Institute of Technology
{tielei.wang, kangjie.lu, long, pchung, wenke}@cc.gatech.edu

This paper is included in the Proceedings of the 22nd USENIX Security Symposium, August 14–16, 2013, Washington, D.C., USA. ISBN 978-1-931971-03-4. Open access to the Proceedings of the 22nd USENIX Security Symposium is sponsored by USENIX.

Abstract

Apple adopts the mandatory app review and code-signing mechanisms to ensure that only approved apps can run on iOS devices. In this paper, we present a novel attack method that fundamentally defeats both mechanisms. Our method allows attackers to reliably hide malicious behavior that would otherwise get their app rejected by the Apple review process. Once the app passes the review and is installed on an end user’s device, it can be instructed to carry out the intended attacks. The key idea is to make the apps remotely exploitable and subsequently introduce malicious control flows by rearranging signed code. Since the new control flows do not exist during the app review process, such apps, namely Jekyll apps, can stay undetected when reviewed and easily obtain Apple’s approval. We implemented a proof-of-concept Jekyll app and successfully published it in App Store. We remotely launched the attacks on a controlled group of devices that installed the app. The result shows that, despite running inside the iOS sandbox, a Jekyll app can successfully perform many malicious tasks, such as stealthily posting tweets, taking photos, stealing device identity information, sending email and SMS, attacking other apps, and even exploiting kernel vulnerabilities.
1 Introduction

Apple iOS is one of the most popular and advanced operating systems for mobile devices. By the end of June 2012, Apple had sold 400 million iOS devices [30], such as iPhone, iPad and iPod touch. Despite the tremendous popularity, in the history of iOS, only a handful of malicious apps have been discovered [24]. This is mainly attributed to the advanced security architecture of iOS and the strict regulations of the App Store. (*Jekyll is a character with dual personalities from the novel The Strange Case of Dr. Jekyll and Mr. Hyde.)

In addition to the standard security features like Address Space Layout Randomization (ASLR), Data Execution Prevention (DEP), and Sandboxing, iOS enforces the mandatory App Review and code signing mechanisms [31]. App Review inspects every app submitted by third parties (in binary form) and only allows it to enter the App Store if it does not violate App Store’s regulations [5]. To further prohibit apps distributed through channels other than the App Store (i.e., unsigned apps), the code signing mechanism disallows unsigned code from running on iOS devices. As a result, all third-party apps running on iOS devices (excluding jailbroken devices [48]) have to be approved by Apple and cannot be modified after they have obtained the approval. According to the official App Review guidelines [5], developers should expect their apps to go through a thorough inspection for all possible term violations. During this process, many reasons can lead to app rejections, such as stealing data from users and using private APIs reserved for system apps. Although the technical details of the review process remain largely unknown, it is widely believed that such a selective and centralized app distribution model has significantly increased the difficulty and cost for malicious or ill-intended apps to reach end users. In this paper, we present a new attack method against the App Store reviewing process and the code signing mechanism.
Using this method, attackers can create malicious or term-violating apps and still be able to publish them on App Store, which in turn opens up new attack surfaces on iOS devices. We stress that our attack does not assume any specifics about how Apple reviews apps, but targets theoretical difficulties faced by any known methods to analyze programs. By demonstrating the power of this practical attack, we highlight the shortcomings of the pre-release review approach and call for more runtime monitoring mechanisms to protect iOS users in the future. The key idea behind our attack is that, instead of submitting an app that explicitly contains malicious functionalities to Apple, the attacker plants remotely exploitable
Apps so constructed bear benign looks and yet are capable of carrying out malicious logic when instructed; we call them Jekyll apps. By carefully designing the vulnerabilities and crafting the gadgets, Jekyll apps can reliably pass app review process and open up a new attack surface on iOS devices when installed. Specifically, an attacker can achieve the following general tasks via Jekyll apps: First, Jekyll apps offer an approach to stealthily abuse user privacy and device resources, for instance, via private APIs\(^1\), which may provide unrestricted access to certain sensitive resources and are intended for Apple’s internal use only. Explicit use of private APIs almost always gets an app rejected by App Store [4]. However, Jekyll apps can dynamically load, locate, and implicitly invoke the private APIs and thus reliably bypass the review checks. Comparing with simple obfuscation techniques (e.g., [7, 23, 25]), our approach hides the usage of private APIs in a way that is more resilient to non-trivial code analysis — without correctly triggering the planted vulnerabilities and arranging the code gadgets, the invocation of private APIs never appears in the code and execution of Jekyll apps. Second, Jekyll apps open a window for attackers to exploit vulnerabilities in kernel space. Although the sandboxing policy in iOS limits the possibility and impact of exploiting kernel vulnerabilities [22] by third-party apps, certain attacks are still effective against vulnerable device drivers (i.e., IOKit drivers [49]). Third, Jekyll apps also serve as a trampoline to attack other apps. On iOS, by requesting a URL, an app can launch another app that has registered to handle that URL scheme. However, this simplified IPC (Inter-process communication) mechanism may facilitate inter-app attacks. 
For instance, once new vulnerabilities have been found in Mobile Safari (the built-in web browser in iOS), an attacker can set up a malicious webpage exploiting such vulnerabilities, use the Jekyll app to direct the Mobile Safari to visit the booby-trapped website, and eventually compromise the browser app. Given the high privileges granted to Mobile Safari, the compromised browser will in turn provide the stepping stone for more powerful attacks, such as untethered jailbreak, as shown by the JailbreakMe attack [1] on old versions of iOS.

<table>
<thead>
<tr><th>Attack Type</th><th>Attack Description</th><th>Affected Version</th></tr>
</thead>
<tbody>
<tr><td rowspan="7">Abuse Device Resources</td><td>Sending SMS</td><td>iOS 5.x</td></tr>
<tr><td>Sending Email</td><td>iOS 5.x</td></tr>
<tr><td>Posting Tweet</td><td>iOS 5.x &amp; 6.x</td></tr>
<tr><td>Abusing Camera</td><td>iOS 5.x &amp; 6.x</td></tr>
<tr><td>Dialing</td><td>iOS 5.x &amp; 6.x</td></tr>
<tr><td>Manipulating Bluetooth</td><td>iOS 5.x &amp; 6.x</td></tr>
<tr><td>Stealing Device Info</td><td>iOS 5.x &amp; 6.x</td></tr>
<tr><td>Attack Kernel</td><td>Rebooting system</td><td>iOS 5.x</td></tr>
<tr><td>Attack Other Apps</td><td>Crashing Mobile Safari</td><td>iOS 5.x &amp; 6.x</td></tr>
</tbody>
</table>

Table 1: Attack summary on iPhone

We have implemented a proof-of-concept Jekyll app and submitted it to the App Store. The app successfully passed Apple’s review despite the hidden vulnerabilities and code gadgets that can be assembled to carry out malicious logic. Following the ethical hacking practice, we immediately removed the app from App Store once a group of experiment devices under our control had downloaded it. The download statistics provided by Apple later confirmed that the app had never been downloaded by any other users.
By exploiting the vulnerabilities and chaining the planted gadgets in the app, we remotely launched many malicious operations on our experiment devices, as summarized in Table 1. (\(^1\)Private APIs are undocumented and often security-critical APIs on iOS; see Section 2.2 for details.) Even on iOS 6.1.2, the latest version of iOS at the time of our experiments, the Jekyll app can abuse the camera device to record videos, post tweets, steal device identity information such as IMEI (the unique device identifier), manipulate the Bluetooth device, attack Mobile Safari, and dial arbitrary numbers. We made a full disclosure of our attack scheme to Apple in March 2013 and have since been in correspondence with Apple. In summary, the main contributions of our work are as follows:

- We propose a novel method to generate iOS apps that can pass App Review and synthesize new control flows as instructed remotely during runtime, without violating code signing. We call such malicious apps Jekyll apps. Given that arbitrary control flows can be introduced to such apps at runtime, the code signing mechanism on iOS is totally defenseless against Jekyll apps.
- We are the first to propose a dynamic analysis technique to discover the private APIs used to post tweets, send email, and send SMS without user’s consent on iOS. We incorporate these attacks, along with a set of previously known iOS attacks, into a Jekyll app to show its versatility.
- We successfully publish a proof-of-concept Jekyll app in the Apple App Store and later launch remote attacks on a controlled group of devices.
- We demonstrate that the security strategy to solely rely on pre-install review, as currently followed by Apple App Store, is ineffective against Jekyll apps and similar attacks. We discuss and advocate runtime security measures as a necessary step in advancing iOS security.

The rest of the paper is organized as follows. Section 2 introduces the background.
Section 3 presents a motivating example and describes the design of our attack scheme. Section 4 demonstrates some of the malicious operations that can be carried out by Jekyll apps. Section 5 gives the implementation details and Section 6 compares our research to related work. Section 7 discusses the potential countermeasures against our attack and Section 8 concludes the paper.

2 Background

2.1 iOS Security

iOS provides a rich set of security features. We briefly introduce the related exploit mitigation mechanisms here. Interested readers are referred to [31, 38] for the overall security architecture of iOS.

**DEP and ASLR.** Apple introduced the Data Execution Prevention (DEP) mechanism in iOS 2.0 and later the Address Space Layout Randomization (ASLR) mechanism in iOS 4.3 [21]. The DEP mechanism in iOS is based on the NX (eXecute Never) bit supported by the ARM architecture, and the kernel prevents third-party apps from requesting memory pages that are writeable and executable at the same time. Since data pages such as the stack and heap are marked non-executable and code pages are marked executable but non-writeable, DEP prevents the traditional code injection attacks that need to write payloads into memory and execute them. ASLR randomizes a process’s memory layout. If a third-party app is compiled as a position-independent executable (PIE), the locations of all memory regions in its process’s address space, including the main executable, dynamic libraries, stack, and heap, are unpredictable. As an important complement to DEP, ASLR makes it very difficult for attackers to launch return-to-libc based or return-oriented programming based attacks (see Section 2.3). However, ASLR in iOS only enforces module-level randomization, that is, executable modules are loaded into unpredictable memory regions, but the internal layout of each module remains unchanged. Thus, the ASLR implementation is vulnerable to information leakage vulnerabilities [45].
If an attacker can obtain the absolute address of a function in a module, she is able to infer the memory layout of that entire module.

**Privilege Separation and Sandboxing.** iOS employs traditional UNIX file permission mechanisms to manage the file system and achieve the basic privilege separation. While all third-party apps run as the non-privileged user mobile, only a few of the most important system processes run as the privileged user root. As a result, third-party apps are not able to change system configurations. To enforce isolation among apps that all run as the same user mobile, iOS utilizes the sandboxing mechanism. The iOS sandbox is implemented as a policy module in the TrustedBSD mandatory access control framework [8]. Each app contains a plist file in XML format, which declares a set of entitlements for the special capabilities or security permissions in iOS. When an app is launched, iOS determines its sandbox policy according to its entitlements. Although the built-in apps in iOS, such as Mobile Safari, run as the non-privileged user mobile, they may be granted special privileges via reserved entitlements. For instance, Mobile Safari has an entitlement called dynamic-codesigning, which allows Mobile Safari to allocate a writable and executable memory buffer and generate executable code on the fly—a security exception made for Mobile Safari’s Just-in-Time (JIT) JavaScript engine to improve performance. As for third-party apps, Apple applies a one-size-fits-all sandbox policy called container. According to the study in [51], in iOS 4.3, this permissive policy allows third-party apps to read the user’s media library, interact with a few IOKit User Clients, communicate with the local Mach RPC servers over the bootstrap port, access the network, etc. On top of the default access granted by the container policy, third party apps can also request two extra entitlements: one for using the iCloud storage and one for subscribing to the push notification service.
Finally, even though the container policy has undergone significant improvements and is becoming more restrictive over time, as we show in this paper, our Jekyll app, even running in sandbox, still poses a significant threat to the user’s privacy and system security. Also, in contrast to other mobile platforms, such as Android, which use declarative permissions to regulate each app individually, iOS applies the default sandbox configuration on most third-party apps, which consequently share the same broad set of privileges. As of iOS 6, only a few sensitive operations, such as accessing location information and the contact book and sending push notifications, have to be explicitly acknowledged by users before they can proceed.

**Code signing, App Store, and App Review.** Along with the release of iOS 2.0 in 2008, Apple opened the App Store, an application distribution platform for iOS devices. Third-party developers are required to submit their apps to App Store for distribution. Since then, iOS has enforced the mandatory code signing mechanism to ensure only the executables that have been approved and signed by Apple are allowed to run on iOS devices. The study in [37] presents the implementation details of the iOS code signing mechanism. In comparison with DEP, the code signing mechanism is stricter. In a DEP-enabled system, attackers can compromise a process using ROP attacks and then download a new binary and run it. This does not apply to iOS because iOS will refuse to run the new binary if it is not signed by a trusted authority. To release an app through App Store, a third-party developer has to participate in Apple’s iOS developer program and submit the app to Apple for review. The app is signed and published by Apple only after it passes the review process. In addition to business benefits, the mandatory review process helps Apple prevent malicious apps from entering App Store.
### 2.2 Public and Private Frameworks

iOS provides the implementation of its system interfaces in special packages called frameworks. A framework is a directory that contains a dynamic shared library and the related resources such as images, localization strings, and header files. Native iOS apps are built on top of these frameworks and written in the Objective-C programming language, a superset of C language. Besides the public frameworks, iOS also contains a set of private frameworks that are not allowed to be used in third-party apps. Even in public frameworks, there are some undocumented APIs (i.e., private APIs) that cannot be used by third-party apps. In fact, these private frameworks and APIs are reserved for the built-in apps and public frameworks. Apple ships all public and private frameworks as part of the iOS Software Development Kit (SDK). Third-party developers can find all these frameworks in their own development environment. It is worth noting that, since iOS 3.x, Apple has combined all frameworks into a single cache file called dyld_shared_cache in iOS devices to improve performance [21]. Moreover, the creation of dynamic libraries by third-party developers is not supported by the iOS SDK, which makes the public frameworks the only shared libraries to link in iOS apps. To prevent apps from dynamically loading private frameworks or unofficial libraries, some standard UNIX APIs are also considered private by Apple, such as dlopen and dlsym that support runtime loading of libraries. During the app review process, linking to private frameworks or importing private APIs can directly result in app rejections from Apple App Store.

### 2.3 Code Reuse and ROP Attack

Reusing the code within the original program is an effective way to bypass DEP and the code signing mechanism. Solar Designer first suggested return-to-libc [16], which reuses existing functions in a vulnerable program to implement attacks. Shacham et al.
proposed the Return-Oriented Programming (ROP) exploitation technique in 2007 [44]. The core idea behind ROP attacks is to utilize a large number of instruction sequences ending with ret-like instructions (e.g., ret on x86 and pop {pc} on ARM) in the original program or other libraries to perform certain computation. Since attackers can control the data on the stack and ret-like instructions will change the execution flow according to the data on the stack, a crafted stack layout can chain these instruction sequences together. Figure 2 shows a simple ROP example that performs addition and storage operations on the ARM platform. Specifically, constant values 0xdeadbeaf and 0xffffffff are loaded to the registers r1 and r2 by the first two gadgets, respectively. Next, an addition operation is performed by the third gadget. At last, the addition result (0xdeadbeae) is stored on the stack by the fourth gadget.

However, our example app (as shown in Figure 3) does not contain any feasible code path to leak the address book after reading it at line 2. As such, our example app appears to be compliant with Apple’s privacy policy and can be expected to pass the app review. To achieve the goal of stealing the user’s contact while avoiding the direct approach that will guarantee rejection by App Store, the attacker instead hides vulnerabilities in the `ConnectToServerAndDownloadGreetingCards` function (line 1 in Figure 3). Subsequently, when the app runs on a victim’s iOS device and tries to download greeting cards from the server controlled by the attacker, the server exploits the planted vulnerabilities to remotely manipulate the app’s stack into the one shown on the right side of Figure 3. The contaminated stack layout will change the original control flows of the app.
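The effect of the four-gadget chain can be simulated in plain C (a sketch of ours that mirrors the description, not actual ARM code). Note that the 32-bit addition of 0xdeadbeaf and 0xffffffff wraps around modulo 2^32.

```c
#include <assert.h>
#include <stdint.h>

/* Simulation of the four ROP gadgets from Figure 2: two loads of
 * constants into r1 and r2, an addition, and a store of the result
 * into a stack slot. Register names follow the text; the C code only
 * models the computation the chained gadgets perform. */
uint32_t run_gadget_chain(uint32_t *stack_slot) {
    uint32_t r1, r2;
    r1 = 0xdeadbeafu;   /* gadget 1: pop constant into r1 */
    r2 = 0xffffffffu;   /* gadget 2: pop constant into r2 */
    r1 = r1 + r2;       /* gadget 3: add r1, r1, r2 (wraps mod 2^32) */
    *stack_slot = r1;   /* gadget 4: store r1 back onto the stack */
    return r1;
}
```

Adding 0xffffffff is equivalent to subtracting one in 32-bit arithmetic, so the value stored is 0xdeadbeae.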
Instead of sequentially executing the statements from line 2 to line 6, the compromised app first reads the address book into a buffer (line 2 in Figure 3), and then directly invokes the `SendFailureReportToServer` function at line 6 to send the content of the buffer (i.e., address book) to the server. Finally, the app resumes the normal execution by returning the control back to line 3. Note that the attacker will avoid revealing the above behavior to Apple and only exploit the vulnerabilities after the app has passed the app review. Malicious developers can freely design the vulnerabilities to bootstrap the attacks. For instance, the app can deliberately leak its memory layout information to the remote server so that ASLR is completely ineffective. Based on the memory layout information, attackers can launch attacks by reusing the existing code inside the app. As a result, DEP and code signing cannot prevent the exploit. Furthermore, by using iOS private APIs, attackers can accomplish more sophisticated attacks, even though the app runs in the sandbox. In other words, once the app gets installed, existing security mechanisms on iOS will be of no defense against the attack.

### 3.2 Attack Scheme Overview

The high-level idea of our attack scheme is very intuitive. The attacker creates a normal app in which he plants vulnerabilities and hides code gadgets alongside the normal functionalities. After the app passes Apple’s app review and gets installed on victims’ devices, the attacker exploits the vulnerabilities and assembles the gadgets in a particular order to perform malicious operations. For our attack to be successful, the planted vulnerabilities should allow us to defeat the ASLR, DEP, and code signing mechanisms in iOS, and at the same time be hardly detectable. To this end, we design an information leakage vulnerability through which the app deliberately leaks its partial runtime memory layout information to the remote attacker.
Thus, the attacker can infer the locations of the pre-deployed gadgets, making ASLR useless. Next, we plant a buffer overflow vulnerability in the app through which the attacker can smash the stack layout and hijack the app’s control flow. The carefully designed stack layout will chain together the gadgets to accomplish malicious tasks. To prevent the vulnerabilities from being detected in the review process, the communication between the app and the server is encrypted, and all the vulnerabilities have special trigger conditions. Considering the fact that no source code but only the executable is provided to the review process, even if advanced vulnerability detection technologies like fuzz testing and dynamic symbolic execution are employed, it is unlikely for the app review process to discover artificially planted and obscured vulnerabilities. Finally, the hidden gadgets should be discretely distributed in the app and mingled with the normal functionalities, without explicit control flow or data flow connections. To do this, we create a number of infeasible branches across the entire code space and hide gadgets under these infeasible branches. In addition, we organize the common operations useful for both legitimate and malicious functionalities into individual functional gadgets.

### 3.3 Bypassing ASLR via Information Leakage

The ASLR mechanism loads the app executable and other dynamic libraries at different random locations for each run, and this causes some difficulties in the process of chaining up our gadgets. However, since native apps are written in Objective-C, it is very easy to plant information leakage vulnerabilities to bypass ASLR and recover the addresses of our gadgets. In the following, we present two examples of how this can be achieved. First, we can take advantage of an out-of-bounds memory access vulnerability to read a function pointer, and then send the value back to the remote server. Specifically, we can use a C code snippet similar to Figure 4.
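Figure 4 itself is not reproduced in the text; the snippet below is a hedged sketch of what such code might look like (the struct layout, names, and the attacker-side recovery helper are our own illustration). A function pointer sits just past a short name buffer, and a server-controlled size lets the transmission read past the buffer.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Hypothetical layout: a function pointer stored right after a short,
 * fixed-size name buffer. */
struct conn {
    char name[8];
    void (*callback)(void);  /* address of some public function */
};

static void public_function(void) { /* stand-in for a real API */ }

/* Pretends to transmit the user name to the server, but the size
 * parameter is server-controlled and unchecked: with
 * size > sizeof(c->name) the copy also leaks the function pointer. */
void send_user_name(const struct conn *c, unsigned char *out, size_t size) {
    memcpy(out, c, size);  /* planted out-of-bounds read */
}

/* Attacker side: recover the leaked pointer from the raw bytes. */
uintptr_t recover_leaked_pointer(const unsigned char *out) {
    uintptr_t p;
    memcpy(&p, out + offsetof(struct conn, callback), sizeof p);
    return p;
}
```

From the recovered pointer and the function's known offset in the binary, the attacker derives the module's load address, defeating module-level ASLR.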
In this case, the app assigns the address of a public function to the function pointer in a C structure, and pretends to transmit the user name to the server. However, the server can control the size parameter of the function and is able to accurately trigger an out-of-bounds read. As a result, the address of the public function is leaked. Based on this address, we can infer the memory layout of the corresponding executable file. Alternatively, we can take advantage of type confusion vulnerabilities and features of Objective-C objects to leak address information. Most objects in Objective-C programs inherit from a common class called NSObject. The first field of these objects points to a Class structure that stores information about the object's type, inheritance hierarchy, member methods, etc. These Class structures follow the same naming convention (i.e., a common prefix _objc_class_$_) and are stored at fixed offsets in the executable files. Using this information, we can also infer the address information of the entire executable file. Figure 5 demonstrates how this method works. First, we create an Objective-C object with the myObject pointer pointing to the object. After that, we convert myObject into an integer pointer by using explicit type-casting. Finally, by dereferencing the integer pointer, we copy the address value of the Class structure into the variable UID, and send it to the remote server.

```c
//create an object
SomeClass* myObject = [[SomeClass alloc] init];
...
int UID = *(int*)myObject; //type confusion
...
SendToServer(UID);
```

Figure 5: Information Disclosure Vulnerability II

Since many of the malicious operations in Table 1 rely on private APIs, some discussion on how we invoke private APIs in our attack is in order. To this end, we need to be able to dynamically load private frameworks and locate private APIs, and we employ two special APIs, dlopen() and dlsym().
dlopen() is used to load and link a dynamic library specified by filename and return an opaque handle for the library. dlsym() is used to get the address of a symbol from a handle returned from dlopen(). These two functions are implemented in a library named libdyld.dylib. Since there is no evidence to show that the exported APIs in this library can be used by third-party apps, we should avoid directly referencing any APIs in this library. Fortunately, we find that both APIs are commonly used by public frameworks due to the need for dynamically loading shared libraries and obtaining the absolute addresses of symbols in the libraries. In particular, in order to support PIE (Position Independent Executable), public frameworks invoke imported APIs through trampoline functions. The trampoline functions here consist of a short sequence of instructions that first load the absolute address of a specific function from an indirect symbol table and then jump to that address. The indirect symbol table is initially set up by the linker at runtime. Therefore, if we can identify the trampolines for dlopen and dlsym in a public framework, our app can use the trampolines to indirectly invoke dlopen and dlsym. The task of identifying usable trampolines is simple. With the help of a debugger, we set function breakpoints at dlopen and dlsym and run a test app on a physical device. When the debug session hits a breakpoint, we examine the call stack to find out the trampoline function and its relative offset to the beginning of the module. Thanks to the fact that ASLR on iOS works at the granularity of modules, we can always infer the addresses of these trampolines from the address of a public function in the same module leaked by our Jekyll app using the vulnerabilities described before. Finally, we note that trampolines for dlopen and dlsym can be found in many essential frameworks, such as UIKit and CoreGraphics.
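Because the ASLR slide is uniform across a module, the resulting address arithmetic is a pair of subtractions and additions. The sketch below uses made-up offsets; in the attack the real values are read out of the framework binary with a debugger, as described above.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical (made-up) offsets inside one framework binary: where
 * the leaked public function and the dlopen/dlsym trampolines sit
 * relative to the module's unslid base. */
enum {
    OFF_LEAKED_PUBLIC_FN = 0x1a2c0,
    OFF_DLOPEN_TRAMP     = 0x0fd10,
    OFF_DLSYM_TRAMP      = 0x0fd20
};

/* Module-level ASLR slides the whole framework by one delta, so a
 * single leaked address pins down every offset in the module. */
uintptr_t module_base(uintptr_t leaked_addr) {
    return leaked_addr - OFF_LEAKED_PUBLIC_FN;
}

uintptr_t dlopen_trampoline(uintptr_t leaked_addr) {
    return module_base(leaked_addr) + OFF_DLOPEN_TRAMP;
}

uintptr_t dlsym_trampoline(uintptr_t leaked_addr) {
    return module_base(leaked_addr) + OFF_DLSYM_TRAMP;
}
```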
### 3.4 Introducing New Execution Paths via Control-Flow Hijacking A key design of our attack scheme is to dynamically introduce new execution paths that do not exist in the original app to perform the malicious operations. In order to achieve this, we plant a vulnerability in the Jekyll app, through which we can corrupt data on the stack and overwrite a function return address (or a function pointer). When the function returns, instead of returning to the original call site, the execution will proceed to a program point that is specified by the altered return address on the stack. Although iOS employs the Stack-Smashing Protector method to detect stack-based overflows, we can accurately overwrite the function return address without breaking the stack canary. ```c void vulnerableFoo(int i, int j){ int buf[16]; ... if(fakeChecks(i)) buf[i]= j; //overwrite return address ... return; } ``` Figure 6: Control Flow Hijacking Vulnerability Specifically, we use an out-of-bounds write vulnerability as shown in Figure 6 to hijack the control flow. In this case, both i and j are controlled by the attacker. Variable i is used to index a local integer array. Since the offset from the starting address of this local array to the memory slot for the function’s return address is fixed, a carefully crafted i can overwrite the return address via an array element assignment without breaking the stack canary [10]. We can also add fake boundary checks on i in the function to prevent the vulnerability from being easily detected. The new return address stored in j points to a gadget that shifts the stack frame to a memory region storing data supplied by the attacker. After that, the new stack layout will chain the gadgets together. By using the existing code in the app, we can defeat DEP and code signing. 
Since our method for introducing new execution paths is essentially return-oriented programming, interested readers are referred to [15] and [33] for the details of ROP on the ARM platform. ### 3.5 Hiding Gadgets In traditional ROP attack scenarios, attackers have to search for usable gadgets in existing binaries or libraries using the Galileo algorithm [44]. However, in our case, the attacker is also the app developer, who can freely construct and hide all necessary gadgets, either at the basic block or function level. This advantage makes our attacks significantly less difficult and more practical to launch than ROP attacks. For the common functional units (such as converting a char* to NSString and invoking a function pointer), which are useful for both malicious and legitimate operations of the app, we implement them in individual functions. As a result, we can simply reuse such functions in our attack based on the return-to-libc-like exploitation technique. For the special gadgets that are not easily found in existing code, we manually construct them by using ARM inline assembly code [32] and hide them in infeasible branches. In our Jekyll app, we have planted and hidden all gadgets that are required by traditional ROP attacks [15], such as memory operations, data processing (i.e., data moving among registers and arithmetic/logical operations), and indirect function calls. To create the infeasible branches, we use the opaque constant technique [34]. For instance, in Figure 7 we set a variable to a non-zero constant value derived from a complicated calculation, and perform a fake check on that variable. Since the compiler cannot statically determine that the variable holds a constant value, it will generate code for both branches. As a result, we can reliably embed the gadgets using similar techniques. Finally, we will conclude this section with a concrete example of our ROP attack. 
Figure 8 shows the original source code for the dialing attack (see Section 4.2), which loads a framework into process memory, locates a private API called CTCallDial in the framework, and finally invokes that function.

```c
int i = Opaque_constant_calculation();
if(i == 0) {
    //hide a gadget in this branch
    asm volatile(
        "pop {r2}"
        "bx r2"
    );
}
```

Figure 7: Hide an indirect call gadget

Accomplishing the equivalent functionality through the ROP technique is very easy, because many function level gadgets are available in our Jekyll app. Specifically, we can find trampolines for dlopen and dlsym in public frameworks (see Section 3.3), and can also reuse existing code in our Jekyll app to implement the indirect call and the conversion from char* to NSString (the argument type of the function CTCallDial is NSString).

```c
void* h = dlopen("CoreTelephony", 1);
void (*CTCallDial)(NSString*) = dlsym(h, "CTCallDial");
CTCallDial(@"111-222-3333");
```

Figure 8: Attack code for dialing

In addition to these function level gadgets, we also utilize a few simple basic block level gadgets that are used to prepare and pass function arguments, recover the stack pointer, and transfer the control back to the normal execution. For example, the first four arguments of a function on iOS are passed through the registers R0-R3. Before jumping into the target function, we can use a gadget like pop {r0,r1,pc} to set up the function's parameters. Such block level gadgets are ubiquitous in the existing code.

4 Malicious Operations

In this section, we introduce the malicious operations we can perform using Jekyll apps. We present how to post tweets and send email and SMS without the user's knowledge in Section 4.1, describe more private APIs based attacks in Section 4.2, and demonstrate Jekyll app's ability to exploit kernel vulnerabilities and attack other apps in Section 4.3 and Section 4.4. 
4.1 Under the Hood: Posting Tweets and Sending Email and SMS Since iOS 5.0, third-party apps are allowed to send Twitter requests on behalf of the user, by using the public APIs in a framework called Twitter. After setting the initial text and other content of a tweet, the public API called by the app will present a tweet view to the user, and let the user decide whether to post it or not, as shown in Figure 9. However, we find that the tweet view in Figure 9 can be bypassed by using private APIs, i.e., our app can post tweets without the user's knowledge. Next, we describe how we discover the private APIs needed for achieving this goal. Our intuition is that if we know the event handling function that is responsible for the "Send" button click event, our app can directly invoke that function to post the tweet, without the need to present the tweet view to the user. To do this, we created a simple app that uses the Twitter framework to post tweets, and ran the app in debug mode. We developed a dynamic analysis tool based on LLDB, a scriptable debugger in the iOS SDK, to log the function invocation sequence after the "Send" button is clicked. In the following, we present some details about our tool. In Objective-C, all object method invocations are dispatched through a generic message handling function called objc_msgSend. A method invocation expression in Objective-C like [object methodFoo:arg0] will be converted into a C function call expression like objc_msgSend(object, "methodFoo:", arg0). Moreover, iOS follows the ARM standard calling convention. The first four arguments of a function are passed through the registers R0-R3, and any additional arguments are passed through the stack. For the C function expression above, the arguments will be passed as follows: R0 stores object, R1 stores the starting address of the method name (i.e., "methodFoo:"), and R2 stores arg0. Our dynamic analysis tool sets a conditional breakpoint at the objc_msgSend function. 
When the breakpoint is triggered after the user clicks the "Send" button, the tool logs the call stack, gets the target method name through the register R1, and retrieves the type information of the target object and other arguments (stored in the registers R0, R2 and R3) by inspecting their Class structures (see Section 3.3). According to the information in the log, we can easily identify the relevant Objective-C classes and private APIs for posting tweets. For instance, in iOS 6.x, we find that a tweet is composed through the method "setStatus:" in a class called SLTwitterStatus, and then is posted through the method "sendStatus:completion:" in a class called SLTwitterSession. Our Jekyll app will dynamically load the Twitter framework, create instances of these classes, and invoke private APIs to post tweets without the user's knowledge. We also extended the idea to find critical private APIs for sending email and SMS. As in the case of posting tweets, third-party apps are able to set the initial text and other content of an email or SMS, and present the email or SMS view to the user. In iOS 5.x, we successfully implemented the code to send email and SMS without the user's knowledge. Specifically, we find that an email is first composed by a method of the class MessageWriter, and then is sent to a service process via an inter-process communication (IPC) interface CPDistributedMessagingCenter. Eventually, the service process will send the email out. In the case of sending SMS, we find that the content of an SMS is first converted into an XPC message, and the XPC message is subsequently passed to an XPC service (another kind of IPC interface in iOS) named com.apple.chatkit.clientcomposeserver.xpc. By using such private APIs, our Jekyll app is able to compose email and SMS objects, pass them to the corresponding service processes, and automatically send them without the user's knowledge. 
An independent study simultaneously reported how to send SMS in this manner; interested readers are referred to [20] for details. However, in iOS 6, Apple introduced a new concept called remote view to enhance the security of email and SMS services. Specifically, a third-party app only passes the initial content of an email or SMS to the corresponding system services. These system service processes will then generate the message view, and let the user make further changes and the final decision. Since the message view runs in a separate process, the third-party app is no longer able to invoke the handler function for the "Send" button click event. 4.2 Camera, Bluetooth, Device ID, and Dialing The iOS developer community has accumulated extensive knowledge of using private APIs and proposed many attacks against jailbroken iOS devices. We integrated some previously known attacks into our Jekyll app. Since these attacks heavily use private APIs, any app that explicitly launches these attacks will most certainly be rejected by Apple. However, our Jekyll app can dynamically load the private frameworks and hide the invocations of private APIs, and successfully passes the App Review. Next, we briefly introduce the private APIs that we utilized to achieve the following tasks without alerting the users: take photos, switch Bluetooth on/off, steal the device identity information, and dial arbitrary numbers. The operations in this subsection work in both iOS 5.x and iOS 6.x. - Abuse cameras. Our Jekyll app is able to stealthily turn on the camera in iOS devices to record videos without the user's knowledge; this can be achieved by creating and assembling object instances of a set of classes such as AVCaptureDeviceInput and AVCaptureVideoDataOutput in the AVFoundation framework. The Jekyll app can also extract every frame of a video stream and transfer the images back to the server. - Switch Bluetooth. 
By using the APIs in a private framework BluetoothManager, our Jekyll app can directly manipulate the Bluetooth device, such as turning it on or off. - Steal Device Identity. To obtain the device identity information, we take advantage of a private function called CTServerConnectionCopyMobileEquipmentInfo in the CoreTelephony framework. This function can return the device's International Mobile Station Equipment Identity (IMEI), International Mobile Subscriber Identity (IMSI), and Integrated Circuit Card Identity (ICCID). - Dial. By invoking the private API CTCallDial in the CoreTelephony framework, our Jekyll app can dial arbitrary numbers. Note that this API can dial not only phone numbers, but also GSM service codes [3] as well as carrier-specific numbers. For instance, by dialing *21*number#, the Jekyll app can forward all calls to the victim's phone to another phone specified by number. 4.3 Exploiting Kernel Vulnerabilities Since they run directly on iOS, native apps are able to directly interact with the iOS kernel and its extensions, making the exploitation of kernel vulnerabilities possible. Even though the sandbox policy limits third-party apps to only communicate with a restricted set of device drivers, and thus significantly reduces the attack surface for kernel exploitation, security researchers still managed to find vulnerabilities in this small set of device drivers [49]. In our Jekyll app, we hide the gadgets that enable us to communicate with the accessible device drivers. Specifically, the Jekyll app can dynamically load a framework called IOKit, in which it further locates the required APIs such as IOServiceMatching, IOServiceOpen and IOConnectCallMethod to create and manipulate connections to device drivers. Therefore, our Jekyll app provides a way for attackers to exploit kernel vulnerabilities. We demonstrate this by exploiting a kernel NULL pointer dereference vulnerability in iOS 5.x, disclosed in [49]. 
The exploitation of this vulnerability causes the iOS device to reboot. 4.4 Trampoline Attack Due to the sandboxing mechanism, iOS apps are restricted from accessing files stored by other apps. However, iOS provides a form of inter-process communication (IPC) among apps using URL scheme handlers. If an app registers to handle a URL type, other apps can launch and pass messages to this app by opening a URL scheme of that type. The http, mailto, tel, and sms URL schemes are supported by built-in apps in iOS. For example, an app opening an http URL will cause the built-in web browser, Mobile Safari, to launch and load the webpage. Since attackers can fully control the content in a URL request, our Jekyll app has the ability to attack other apps that have vulnerabilities in handling malformed URL requests. In our proof-of-concept Jekyll app, we demonstrated an attack against Mobile Safari; in particular, we prepared a web page containing malicious JavaScript code that can trigger an unpatched vulnerability in Mobile Safari. Through our Jekyll app, we can force the victim's Mobile Safari to access this web page. Finally, Mobile Safari will crash when loading the webpage due to a memory error. JailbreakMe [1], a well-known jailbreak tool, completes an untethered jailbreak by exploiting a vulnerability in Mobile Safari and then exploiting a kernel vulnerability. If new vulnerabilities in Mobile Safari are disclosed by other researchers in the future, we can simply take advantage of them to launch similarly powerful attacks. 5 Jekyll App Implementation We have implemented a proof-of-concept Jekyll app based on an open source news client called News:yc [2]. The original News:yc app fetches news from a server, and allows the user to share selected news items through email. We modified News:yc in several places. First, we configured it to connect to a server controlled by us. Second, we planted vulnerabilities and code gadgets in the app. 
These vulnerabilities are triggerable by special news contents, and the code gadgets support all the malicious operations listed in Table 1. Third, we modified the app to use a secure protocol that provides authenticated and encrypted communication, so that the app client only accepts data from our server. In addition, the server was configured to deliver exploits only to clients from specific IP addresses, which ensures that only our testing devices can receive the exploits. Figure 10.a shows a snapshot of the app.

Figure 10: Snapshots of the app

We submitted the app to Apple and got Apple's approval after 7 days. Figure 11 shows the approval notification from Apple. Once the app was on the App Store, we immediately downloaded it onto our testing devices and removed it from the App Store. We have data to show that only our testing devices installed the app. The server was also stopped after we finished the testing. The testing results are summarized in Table 1. By exploiting the vulnerabilities and chaining the planted gadgets, we can send email and SMS and trigger a kernel vulnerability on iOS 5.x, and post tweets, record videos, steal the device identity, manipulate Bluetooth, dial arbitrary numbers, and attack Mobile Safari on both iOS 5.x and iOS 6.x. We show the attack of stealing the device identity in Figure 10.b. We have made a full disclosure of our attack to Apple. 6 Related Work Jailbreak, which obtains the root privilege and permanently disables the code signing mechanism, represents the majority of efforts to attack iOS [38]. Since jailbreak usually relies on a combination of vulnerabilities found in the iOS kernel, the boot loaders, and even the firmware, Apple and hackers have long played a cat-and-mouse game. However, due to Apple's increasing efforts to secure iOS and fix known bugs, it is becoming extremely difficult to find exploitable vulnerabilities in newer versions of iOS. 
Our attack does not try to achieve a jailbreak on iOS devices; instead, it takes advantage of the intrinsic incapability of the App Review process and the design flaws of iOS to deliver various types of malicious operations remotely, which cannot be trivially addressed via software updates. Note that it is possible for Jekyll apps to take advantage of the vulnerabilities used by jailbreak tools to compromise iOS devices. C. Miller [37] recently discovered a vulnerability in the iOS code signing mechanism, which allows attackers to allocate a writable and executable memory buffer. He demonstrated that, by exploiting this vulnerability, a malicious app can safely pass the app review process if it generates malicious code only at runtime. However, Apple promptly fixed the issue, and therefore effectively blocked apps that use similar methods to load or construct malicious code at runtime. In contrast, Jekyll apps do not hinge on specific implementation flaws in iOS. They present an incomplete view of their logic (i.e., control flows) to app reviewers, and obtain the signatures on the code gadgets that remote attackers can freely assemble at runtime by exploiting the planted vulnerabilities to carry out new (malicious) logic. In addition, the lack of runtime security monitoring on iOS makes it very hard to detect and prevent Jekyll apps. Considering that ROP attacks can achieve Turing-completeness [9] and automatic ROP shellcode generation is also possible [29, 43], the attack scheme in this paper significantly generalizes the threat in [37]. Return-Oriented Programming (ROP) [44], without introducing new instructions, carries out new logic that is not embodied in the original code. ROP and its variants [11, 29, 33, 36] allow attackers to create new control flows of a program at runtime via code gadget rearrangements, obviating the need for code injections that are prevented by DEP and code signing. 
Jekyll apps also employ code gadget rearrangements to alter runtime control flows, an idea inspired by ROP. However, our attack differs from ROP in both its assumptions and its goal. A traditional ROP attack targets programs that are out of the attacker's control, and its power is often limited by the availability of useful code gadgets. In comparison, Jekyll apps are created and later exploited by the same person, who has ultimate control over gadget availability. On the other hand, traditional ROP attackers have no concern about hiding potential code gadgets and their inter-dependencies, whereas we do, so that Jekyll apps can bypass existing and possible detections. Currently, we need to manually construct the ROP exploits that are responsible for chaining gadgets together. However, previous studies [29, 43] have demonstrated the possibility of automatically generating ROP shellcode on the x86 platform. We leave automatic ROP shellcode generation for Jekyll apps as future work. In addition, M. Prati [40] proposed a way to hide ROP gadgets in open source projects with the purpose of evading code audits of the projects. This implies that even if Apple could audit the source code of third-party apps in the future, detecting the hidden gadgets would still be quite challenging. Jekyll apps also share a common characteristic with trojan and backdoor programs [13]: the malice or vulnerabilities of the attacker's choice can be freely planted into the program, which later cooperates with the attacker when installed on a victim's device. In fact, a Jekyll app can be deemed an advanced backdoor app that stays unsuspicious and policy-abiding when analyzed during the app review process, but turns malicious at runtime only when new control flows are created per the attacker's command. Thus far Apple's strict app publishing policies and review process [5] have helped keep malicious apps out of iOS devices [41]. 
Automated static analysis methods, such as [17, 26], were also proposed to assist the review process in vetting iOS apps. However, as we have demonstrated with our design and evaluation of Jekyll apps, malicious apps can easily bypass human reviewers and automatic tools if their malicious logic is constructed only at runtime. This demonstrates the limitations of Apple's current strategy, which solely relies on app reviewing to find malicious apps and disallows any form of security monitoring mechanism on iOS devices. 7 Discussion In this section, we discuss a number of possible countermeasures against Jekyll apps and analyze the effectiveness as well as the feasibility of these countermeasures. 7.1 Possible Detection at App Review Stage Two possible directions that the app reviewers may pursue to detect Jekyll apps are: (i) discover the vulnerabilities we plant; (ii) identify the code gadgets we hide. We emphasize that discovering software vulnerabilities using static analysis alone is fundamentally an undecidable problem [35], even without considering the powerful adversary in our attack who can arbitrarily obscure the presence of the vulnerabilities. Dynamic analysis based vulnerability detection approaches can also be easily defeated by using complicated trigger conditions and encrypted input data. We argue that the task of making all apps in the App Store vulnerability-free is not only theoretically and practically difficult, but also quite infeasible for Apple from an economic perspective, because such attempts will significantly complicate the review tasks, and therefore prolong the app review and approval process that is already deemed low in throughput by third-party app developers. 
To simplify the engineering efforts, our current implementation of the Jekyll app directly includes some code gadgets in an isolated fashion (i.e., unreachable from program entry points), essentially leaving them as dead code that may be detectable and in turn removed during the app review process. However, given our freedom to craft the app, it is entirely possible to collect all gadgets from the code that implements the legitimate functionalities of the app, without the need to hide extra gadgets as dead code. In summary, even though the hidden vulnerabilities and gadgets might take unusual forms compared with regular code, accurately detecting Jekyll apps (e.g., based on statistical analysis) is still an open challenge. Thus, detecting Jekyll apps in the App Review process via vulnerability discovery or gadget identification is not a feasible solution. 7.2 Possible Mitigation through Improved or New Runtime Security Generally, improving the existing security mechanisms or introducing more advanced runtime monitoring mechanisms can limit Jekyll apps' capability to perform malicious operations. However, completely defeating Jekyll apps is not easy. - A natural idea to limit Jekyll apps is to technically prevent third-party apps from loading private frameworks or directly invoking private APIs. However, Jekyll apps do not have to dynamically load private frameworks. As we discussed, since many public frameworks rely on these private frameworks, Jekyll apps can reasonably link to these public frameworks so that certain private frameworks will also be loaded into the process space by the system linker. A stricter execution environment like Native Client [50] can help prevent apps from directly invoking private APIs by loading private frameworks into a separate space and hooking all invocations. However, since iOS public and private frameworks are tightly coupled, applying such a mechanism to iOS is quite challenging. 
- Fine-grained ASLR such as [27, 39, 46] can greatly reduce the number of gadgets that we can locate at runtime, even with the help of the planted information leakage vulnerabilities. Although expanding the scale and refining the granularity of the information leakage can help obtain a detailed view of the memory layout, Jekyll apps may lose their stealthiness due to the increased exposure of the vulnerabilities and increased runtime overhead. - A fine-grained permission model, sandbox profile, or user-driven access control policy [28, 42] can also help limit the damage done by Jekyll apps. However, simply using an Android-like permission system will not be an insurmountable obstacle to Jekyll apps. As long as a Jekyll app can reasonably require all permissions, it can still carry out certain attacks successfully. A user-driven access control model [28, 42] also cannot stop Jekyll apps from abusing the access already granted and attacking other apps or the kernel. Take the greeting card app in Section 3.1 as an example: after the user allows this app to access the address book, it is very hard to prevent the app from leaking the information. - Since Jekyll apps heavily rely on control flow hijacking vulnerabilities, advanced exploit prevention techniques such as CFI [6] may effectively limit Jekyll apps. CFI ensures that runtime control-flow transfers conform with rules that are derived from the static analysis of the program and the constraints inferred from the execution context. MoCFI [14] and PSiOS [47] brought the same idea to iOS, with the caveat that they require jailbroken devices. Despite its high performance overhead and low adoption rate in practice, CFI is generally deemed effective against conventional ROP attacks, which partially inspired the design of Jekyll apps. In principle, if properly implemented and deployed on iOS, CFI can significantly increase the complexity of designing Jekyll apps and force attackers to trade code flexibility for success. 
Although skilled attackers presumably can either employ very systematic non-control data attacks [12] to perform malicious operations or use function-level gadgets to bypass CFI, given their freedom to craft the gadgets in our attack, they may have to sacrifice the stealthiness of Jekyll apps to some extent due to the increased distinguishability caused by such techniques. - Type-safe programming languages like Java are immune to low-level memory errors such as buffer overflows. Thus, if we could enforce that third-party apps be developed in type-safe programming languages, we could prevent attackers from planting control flow hijacking or information leakage vulnerabilities in the apps. In summary, we advocate official support for runtime security monitoring mechanisms on iOS. Our design of Jekyll apps intends to motivate such mechanisms, which can protect iOS against advanced attacks and ensure that the app review practice and regulations receive their maximum efficacy. 8 Conclusion In this paper, we presented a novel attack scheme that can be used by malicious iOS developers to evade the mandatory app review process. The key idea is to dynamically introduce new execution paths that do not exist in the app code as reviewed by Apple. Specifically, attackers can carefully plant a few artificial vulnerabilities in a benign app, and then embed the malicious logic by decomposing it into disconnected code gadgets and hiding the gadgets throughout the app code space. Such a seemingly benign app can pass the app review because it neither violates any rules imposed by Apple nor contains functional malware. However, when a victim downloads and runs the app, attackers can remotely exploit the planted vulnerabilities and in turn assemble the gadgets to accomplish various malicious tasks. We demonstrated the versatility of our attack via a broad range of malicious operations. 
We also discussed our newly discovered private APIs in iOS that can be abused to send email and SMS and post tweets without the user's consent. Our proof-of-concept malicious app was successfully published on App Store and tested on a controlled group of users. Even running inside the iOS sandbox, the app can stealthily post tweets, take photos, gather device identity information, send email and SMS, attack other apps, and even exploit kernel vulnerabilities.

Acknowledgements

We thank our shepherd Benjamin Livshits and the anonymous reviewers for their valuable comments. This material is based upon work supported in part by the National Science Foundation under grants no. CNS-1017265 and no. CNS-0831300, and the Office of Naval Research under grant no. N000140911042. Any opinions, findings, and conclusions or recommendations expressed in this material are those of the authors and do not necessarily reflect the views of the National Science Foundation or the Office of Naval Research.

References

D. Larochelle and D. Evans. Statically detecting likely buffer
Continuous Delivery Practices in a Large Financial Organization Carmine Vassallo¹, Fiorella Zampetti¹, Daniele Romano², Moritz Beller³, Annibale Panichella³, Massimiliano Di Penta¹, Andy Zaidman³ ¹University of Sannio, Italy, ²ING NL, Amsterdam, The Netherlands, ³Delft University of Technology, The Netherlands Abstract—Continuous Delivery (CD) is an agile software development practice in which developers frequently integrate changes into the main development line and produce releases of their software. An automated Continuous Integration infrastructure builds and tests these changes. Claimed advantages of CD include (i) early discovery of (integration) errors, (ii) reduced cycle time, and (iii) better adoption of coding standards and guidelines. This paper reports on a study in which we surveyed 152 developers of a large financial organization (ING Netherlands), and investigated how they adopt a Continuous Integration and delivery pipeline during their development activities. In our study, we focus on topics related to managing technical debt, as well as test automation practices. The survey results shed light on the adoption of some agile methods in practice, and sometimes confirm, while in other cases confute, common wisdom and results obtained in other studies. For example, we found that refactoring tends to be performed together with other development activities, technical debt is almost always “self-admitted”, developers document source code in a timely manner, and assure the quality of their product through extensive automated testing, with a third of respondents dedicating more than 50% of their time to testing activities. Index Terms—Continuous Delivery, Continuous Integration, DevOps, Agile Development, Technical Debt, Refactoring, Testing, Test-Driven Development I. INTRODUCTION Continuous Integration (CI) was originally introduced by Grady Booch in 1991 [1], and came into fashion as one of the twelve Extreme Programming practices in 1997 [2].
Fowler defines CI as [3]: A software development practice where members of a team integrate their work frequently, usually each person integrates at least daily – leading to multiple integrations per day. Each integration is verified by an automated build (including test) to detect integration errors as quickly as possible. CI has multiple assumed benefits, for example, that integration errors among different components of a software application can be detected earlier, easier, and with less manual effort [4]. At the heart of CI stands a testing phase, possibly in multiple integration environments, in which unit, integration, system, and even acceptance tests can automatically be executed [5], [3]. This can be complemented by running Automated Static Analysis Tools (ASATs), e.g., FindBugs, Checkstyle, or JSHint, as part of the CI process to augment the dynamic testing phase [6]. In addition to these checks of code and system quality, CI is said to improve release frequency and predictability [7], increase developer productivity [8], and improve communication [9], hence reducing the time-to-market and allowing users to benefit from continuous updates of their software. Continuous Delivery (CD) is the development practice that enables frequent releases with the help of a CI process [10]. Ståhl and Bosch observed that CI, and by extension CD, have become increasingly popular in software development [11]. However, they also observed that there is not one homogeneous practice of continuous integration; rather, there are variation points, with the term continuous integration acting as an umbrella for a number of variants [11]. Moreover, they showed that there is no clear insight into how the practice of CD influences other aspects of the development process.
The goal of this paper is thus to shed light on the interaction between CI and CD with respect to (i) the general development process, (ii) managing technical debt, (iii) testing activities, and (iv) technical questions about the CI infrastructure. To bootstrap this investigation, one of the authors spent three months as an intern in a large financial organization, namely ING Netherlands (https://www.ing.nl, in the following referred to as ING NL), and observed how their newly adopted CD environment enables developers to run their own operations, called DevOps [12]. Based on these inside observations by an outsider to ING NL, we designed a survey in which we asked developers about the various practices they adopt in the CD pipeline. By consulting and embedding an external technical expert without domain knowledge, ING NL wanted to gain an independent understanding of its process and identify potential areas of improvement with regard to testing and managing technical debt. Paper Structure. Section II provides an overview of the CD pipeline in ING NL. Section III defines the study, formulates its research questions, and details its planning. Then, Section IV reports and discusses the study results. Threats to the validity of the conducted studies are discussed in Section V, while Section VI discusses related literature on CD and build-release management. Finally, Section VII concludes the paper. II. CONTINUOUS DELIVERY IN ING NL ING is a large financial organization with about 94,000 employees and over 67 million customers in more than 40 countries. Nine years ago, ING NL realized the need to fundamentally change the organization of its Information Technology (IT) department. The main rationale was to bridge the gap between IT and ING NL’s core business. Before that, the IT activities were mainly outsourced, which created managerial effort and costs, while taking resources away from development.
Moreover, the previously adopted development process exhibited a communication gap between the department aimed at “changing the business”, i.e., changing its software, and the department aimed at “running the business”, i.e., operating and maintaining the software. This gap was mainly bridged by complex processes and procedures for managing changes. Such rigor was mainly introduced to ensure the stability of the software systems being developed. To “change the business”, the focus was on guaranteeing short release cycles. This created conflicting objectives between developers (“Devs”), whose goal it was to meet deadlines, and operators (“Ops”), whose goal it was to reduce the risk of runtime incidents. The development process changed when ING NL decided to introduce a mobile application for online banking, since long development cycles would have led to an outdated application. For this reason, development activities were changed from the previous outsourcing model to a process in which development was internal to the company. As part of this change, DevOps teams were introduced. Such teams take charge of an application over its whole lifetime, i.e., during development and operations. The next step was the introduction of a CD pipeline enforcing an agile development process to reduce the testing and deployment effort and duration, especially because such activities used to be mainly manual work for two separate teams. Fig. 1 depicts the CD pipeline that has been put in place in ING NL. As the figure shows, the pipeline is composed of two layers: a base layer (depicted at the bottom), which is a typical CD pipeline, and an additional layer (top) which deals with continuous monitoring. As soon as a developer pushes a commit, this is detected by the CI server, Jenkins [13], and triggers the software build.
Its main task is to run build scripts, mainly Maven scripts, but also, for a minority of projects, Ant, Gradle, and other build scripts. Similar to most open-source CI builds [5], builds at ING NL are considered broken for a number of reasons, ranging from traditional compilation errors to failing test cases, up to software quality problems – e.g., the presence of a code smell like too high McCabe cyclomatic complexity – detected by ASATs. At ING NL, the ASAT of choice is SonarQube [14]. In case the build succeeds, the artifacts are stored in the Repository stage using Artifactory [15]. This introduces several advantages, such as the possibility of implementing caching mechanisms for rapid application re-deployment. Once the Repository stage is reached, the application is ready to be deployed in different environments, i.e., DEV (development), TST (testing), ACC (acceptance), and PRD (production). The monitoring layer in the pipeline (top part of Fig. 1) collects a series of metrics for evaluating the CD pipeline’s performance. This comprises three phases: (i) instantiating a CD pipeline, (ii) performing measurements on the pipeline, and (iii) learning from such measurements to further improve the pipeline. The monitoring layer is detailed in Fig. 2. It is composed of an event bus, implemented using Apache Kafka [16] and aimed at collecting events (e.g., build failures or successes) from the pipeline and storing them in a database, implemented using MongoDB [17]. The information stored in the database is then used by different monitoring tools, shown in the top part of Fig. 2. The system health monitoring tool monitors the pipeline’s software and hardware resources; its primary purpose is ensuring the pipeline’s availability. The automated acceptance criteria tool checks whether a release meets the acceptance criteria defined by the organization before promoting it to the ACC or PRD stage.
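The two layers described above can be sketched as a minimal in-memory model. This is purely illustrative: the actual pipeline uses Jenkins, Artifactory, Kafka, and MongoDB, and every class and function name below is a hypothetical stand-in, not ING NL's implementation.

```python
# Illustrative in-memory model of the two-layer pipeline: a delivery layer
# (build -> repository -> DEV/TST/ACC/PRD) and a monitoring layer that
# collects build events on an event bus. All names are hypothetical.
from collections import defaultdict

ENVIRONMENTS = ["DEV", "TST", "ACC", "PRD"]  # deployment stages


class EventBus:
    """Stand-in for the Kafka event bus feeding the monitoring tools."""

    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, handler):
        self.subscribers[topic].append(handler)

    def publish(self, topic, payload):
        for handler in self.subscribers[topic]:
            handler(payload)


class BuildMonitor:
    """Stand-in for the test-analytics tool: tracks the build failure rate."""

    def __init__(self, bus):
        self.outcomes = []
        bus.subscribe("build", self.outcomes.append)

    def failure_rate(self):
        failed = sum(1 for o in self.outcomes if o["status"] == "failed")
        return failed / len(self.outcomes) if self.outcomes else 0.0


def run_pipeline(commit, build_step, bus, repository):
    """Build a commit; on success, archive the artifact (Artifactory-like)
    and promote it through every environment up to production."""
    ok = build_step(commit)
    bus.publish("build", {"commit": commit, "status": "ok" if ok else "failed"})
    if not ok:
        return []  # broken build: nothing is archived or promoted
    repository[commit] = f"artifact-{commit}"
    return [(env, repository[commit]) for env in ENVIRONMENTS]
```

A failing build still publishes an event to the bus, so the monitoring layer observes it even though nothing is deployed; this mirrors how the measurement phase sees every pipeline run, not only successful ones.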
The automated team maturity and test analytics tools inform teams about releases (e.g., mean cycle time a team is able to handle) and statistics about test execution, such as the percentage of failed tests. The whole monitoring approach reflects the Lean cycle [18], in which DevOps engineers continuously learn by observing metrics and adapt the pipeline when needed. ING NL has monitored the effect of CD adoption in terms of costs, productivity, and customer satisfaction. In three years, from 2011 to 2013, ING NL has increased the number of delivered function points by 300% and reduced the cost of a single function point to one third. Additionally, between 2012 and 2014, the release frequency has doubled, reaching one release every four days. III. STUDY DESIGN The goal of this study is to better understand the implementation of CD practices in industry, by surveying how software engineers use relevant methods and tools during the development process. The context is the CD pipeline of a large financial organization (ING NL). More specifically, the study aims at addressing the following four research questions: - **RQ1**: What are general development practices within the Continuous Delivery pipeline? This research question is preliminary to the ones going deeper into the CD process, and mainly aims at investigating to what extent developers share a common development methodology, and how they plan, schedule, and monitor their development activities. - **RQ2**: What are the practices adopted to manage technical debt? This research question aims at understanding how developers manage technical debt by commenting source code, by reviewing it, and by performing any sort of static analysis or metric extraction. - **RQ3**: What are the testing practices adopted within the Continuous Delivery pipeline? This research question aims at understanding how testing is framed within the software development process, e.g., whether DevOps adopt a Test-Driven Development approach [19]. 
- **RQ4**: How is Continuous Integration performed? This research question investigates the developers’ attitude toward coordinating changes through the CD infrastructure, including the use of private builds and the priority given to fixing build breakages. A. Context Selection As a population of candidate participants for the survey, we selected a total of 176 DevOps engineers belonging to various development teams of ING NL. Such participants were identified through the projects’ mailing lists. B. Survey Design The four research questions have been addressed by means of a survey. The survey was designed by the authors by observing the development activities (by looking at the lifecycle of user stories and participating in daily stand-up meetings), and by talking with developers to get insights about the CD pipeline and the way it has been implemented in ING NL. The survey also addresses specific knowledge needs at ING NL, triggered by one of the authors who is affiliated with ING NL. The survey is organized into four sections, plus a preliminary section aimed at investigating demographic characteristics of the respondents (age, years of experience, years in ING NL, and technical skills). Overall, it consists of 48 questions, plus five demographics questions. The questionnaire allowed the respondent to select among one or more answers (in most cases multiple answers were allowed), and if needed to provide a textual answer (i.e., by selecting “Other” among the options). In Tables I–IV, we give an abbreviated summary of the questions we asked developers.1 Table I reports the questions aimed at addressing **RQ1**. As can be noticed, besides the first question, mainly aimed at understanding whether DevOps engineers share the methodology being adopted, all other questions clearly refer to agile development practices and in particular to Scrum [20].
For example, we ask questions about sprint planning and user story progress monitoring, but also specific questions about how DevOps engineers manage issues and schedule/perform refactoring actions. We asked specific questions about refactoring because in this study we were particularly interested in understanding activities related to technical debt management. Specific questions about managing technical debt – reported in Table II – compose the second part of the survey, aimed at addressing **RQ2**. We ask questions about (i) how developers document source code by means of comments, (ii) how they perform code review, (iii) what kinds of problems they detect by means of code review and by using automated smell detection tools, as well as how they remove problems by means of refactoring, and (iv) whether they perceive that smells are usually introduced because of deadline pressure. The third part of the survey aims at addressing **RQ3** and features questions about testing activities, as shown in Table III. After a question aimed at understanding whether DevOps engineers use TDD, we asked questions about the effort spent on writing test cases and the extent to which test cases are kept up-to-date. Also, we ask questions about the information and strategies being used to derive test cases for different testing levels. Then we ask questions about test execution (i.e., to what extent it is done within private builds or on the CI server), and how developers assess test effectiveness and deal with low coverage. Finally, the fourth part of the survey addresses **RQ4** and is composed of questions (see Table IV) about (i) promotion policies2, (ii) how DevOps engineers handle build failures, (iii) how they use branches, and (iv) how frequently they push their changes. C.
Survey Operation The survey questionnaire was uploaded onto a survey management platform internal to ING NL, and the candidate participants were invited using an invitation letter explaining the general goals of the survey, its length and estimated time to complete, and highlighting how its results have the purpose of understanding the CD process within ING NL, also in order to identify directions for its improvement. Respondents had a total of three weeks to participate in the survey, and a reminder was sent every week to those who had not yet participated. In total, we obtained 152 filled questionnaires out of 176 invitations, i.e., we achieved a return rate of 86%. We left respondents the choice not to answer a question. The number of answers for each question is reported in the last column of the tables enumerating the questions. Overall, the median number of responses per question was 129 for **RQ1**. --- 1The original survey with all questions is available at https://figshare.com/s/fa8c4e11fe9fa4b8f8cb 2A promotion entails the selection of a release candidate and subsequent deployment to the correct environment [21]. ### TABLE I **Development process - Questions (S/M/R stands for Single, Multiple, or Ranking answer question).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q1.1</td> <td>What is your software development methodology?</td> <td>S</td> <td>150</td> </tr> <tr> <td>Q1.2</td> <td>Is the product vision always clear to you? Why? Why not?</td> <td>S,M</td> <td>149</td> </tr> <tr> <td>Q1.3</td> <td>Do you prefer to use a physical board or an electronic one?
Why?</td> <td>S,M</td> <td>125</td> </tr> <tr> <td>Q1.4</td> <td>During a sprint why do you add some tasks to the already planned ones?</td> <td>R</td> <td>138</td> </tr> <tr> <td>Q1.5</td> <td>Which is the main topic you address during the sprint retrospective?</td> <td>S</td> <td>138</td> </tr> <tr> <td>Q1.6</td> <td>Which is the average percentage of completed user stories at the end of a sprint?</td> <td>S</td> <td>138</td> </tr> <tr> <td>Q1.7</td> <td>Which Scrum metrics do you usually collect?</td> <td>M</td> <td>128</td> </tr> <tr> <td>Q1.8</td> <td>Which is the main reason why a “done” user story comes back to “in-progress”?</td> <td>S</td> <td>130</td> </tr> <tr> <td>Q1.9</td> <td>Do you consider non-functional requirements as definition of “done” of a user story?</td> <td>S</td> <td>130</td> </tr> <tr> <td>Q1.10</td> <td>Which kind of non-functional requirements do you consider as definition of “done” of a user story?</td> <td>M</td> <td>120</td> </tr> <tr> <td>Q1.11</td> <td>You detect a defect that was previously resolved: how to deal with it?</td> <td>S</td> <td>129</td> </tr> <tr> <td>Q1.12</td> <td>Do you usually schedule refactoring tasks? 
Why?</td> <td>S</td> <td>129</td> </tr> <tr> <td>Q1.13</td> <td>Which priority do you usually assign to refactoring tasks?</td> <td>S</td> <td>128</td> </tr> <tr> <td>Q1.14</td> <td>How frequently are refactoring tasks included in other tasks?</td> <td>S</td> <td>128</td> </tr> <tr> <td>Q1.15</td> <td>Which is the average percentage of scheduled refactoring tasks that are completed at the end of a sprint?</td> <td>S</td> <td>123</td> </tr> </tbody> </table> ### TABLE II **Managing technical debt - Questions (S/M/R stands for Single, Multiple, or Ranking answer question).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q2.1</td> <td>To what extent do you introduce method and class level comments?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.2</td> <td>To what extent do you introduce statement level comments?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.3</td> <td>To what extent do you update code documentation/comments?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.4</td> <td>Do you perform code review? Why?</td> <td>S,M</td> <td>116</td> </tr> <tr> <td>Q2.5</td> <td>How do you usually detect code smells?</td> <td>M</td> <td>110</td> </tr> <tr> <td>Q2.6</td> <td>Which of those problems do you usually detect? (null pointers, interface misuse, memory leaks, unreachable code, unused variables, uninitialized variables)</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q2.7</td> <td>Which of these bad design/implementation choices do you usually detect during code reading? 
(function having huge size, method with many responsibilities, high module coupling, module exposing its attributes)</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q2.8</td> <td>Which source code metrics do you usually look at?</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q2.9</td> <td>Do you sometimes do poor implementation choices because of near deadline?</td> <td>S</td> <td>116</td> </tr> <tr> <td>Q2.10</td> <td>Do you usually use a tool in order to do code refactoring? Why?</td> <td>S</td> <td>116</td> </tr> </tbody> </table> ### TABLE III **Testing - Questions (S/M/R stands for Single, Multiple, or Ranking answer question).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q3.1</td> <td>Do you use TDD (Test Driven Development)? Why/why not?</td> <td>S,M</td> <td>125</td> </tr> <tr> <td>Q3.2</td> <td>Which percentage of your time do you spend on writing tests?</td> <td>S</td> <td>124</td> </tr> <tr> <td>Q3.3</td> <td>How frequently do you review and (if necessary) update the tests for every change to production code?</td> <td>S</td> <td>124</td> </tr> <tr> <td>Q3.4</td> <td>Do you usually test the code written earlier by others? Why (not)</td> <td>S,M</td> <td>124</td> </tr> <tr> <td>Q3.5</td> <td>Which strategy do you usually use to categorize inputs for each test case?</td> <td>S</td> <td>122</td> </tr> <tr> <td>Q3.6</td> <td>Which information do you need in order to perform Unit Testing?</td> <td>M</td> <td>122</td> </tr> <tr> <td>Q3.7</td> <td>Which information do you need in order to perform Integration Testing?</td> <td>M</td> <td>122</td> </tr> <tr> <td>Q3.8</td> <td>Do you usually automate the generation of the test cases?</td> <td>S</td> <td>122</td> </tr> <tr> <td>Q3.9</td> <td>In which kind of testing do you usually automate the generation of the test cases?</td> <td>M</td> <td>21</td> </tr> <tr> <td>Q3.10</td> <td>Which kinds of testing are executed automatically? 
Why (not)?</td> <td>M</td> <td>120</td> </tr> <tr> <td>Q3.11</td> <td>Where do you test code?</td> <td>M</td> <td>120</td> </tr> <tr> <td>Q3.12</td> <td>Which percentage of written tests are executed?</td> <td>S</td> <td>120</td> </tr> <tr> <td>Q3.13</td> <td>Do you always run all test cases together? Why?</td> <td>S</td> <td>120</td> </tr> <tr> <td>Q3.14</td> <td>How frequently do tests pass?</td> <td>S</td> <td>120</td> </tr> <tr> <td>Q3.15</td> <td>Which types of code coverage do you measure?</td> <td>M</td> <td>107</td> </tr> <tr> <td>Q3.16</td> <td>Which is the average percentage of code coverage that you usually score during unit testing?</td> <td>S</td> <td>103</td> </tr> <tr> <td>Q3.17</td> <td>How do you deal with low coverage?</td> <td>S</td> <td>103</td> </tr> <tr> <td>Q3.18</td> <td>Which of those test metrics do you find useful?</td> <td>M</td> <td>116</td> </tr> <tr> <td>Q3.19</td> <td>How do you react to a failure?</td> <td>R</td> <td>116</td> </tr> </tbody> </table> ### TABLE IV **Continuous integration - Questions (S/M/R stands for Single, Multiple, or Ranking answer question).** <table> <thead> <tr> <th>#</th> <th>Summarized Question</th> <th>S/M/R</th> <th># of Resp.</th> </tr> </thead> <tbody> <tr> <td>Q4.1</td> <td>Promotion policies: what do you do when you are ready to push code on the master branch?</td> <td>S</td> <td>112</td> </tr> <tr> <td>Q4.2</td> <td>How do you deal with failures at building/packaging time?</td> <td>S</td> <td>112</td> </tr> <tr> <td>Q4.3</td> <td>Branching issues: how do you deal with parallel development?</td> <td>S</td> <td>112</td> </tr> <tr> <td>Q4.4</td> <td>When do you usually push your changes?</td> <td>S</td> <td>112</td> </tr> </tbody> </table> TABLE V RESPONDENTS’ DEMOGRAPHICS: AGE, YEARS OF DEVELOPMENT EXPERIENCE, AND YEARS SPENT IN ING NL. 
<table> <thead> <tr> <th>Age</th> <th>Years of experience</th> <th>Years spent in ING NL</th> </tr> </thead> <tbody> <tr> <td>&lt; 30</td> <td>&lt; 1</td> <td>41</td> </tr> <tr> <td>30-39</td> <td>1</td> <td>19</td> </tr> <tr> <td>40-50</td> <td>2-5</td> <td>28</td> </tr> <tr> <td>&gt; 50</td> <td>6-10</td> <td>13</td> </tr> </tbody> </table> Fig. 3. Technological Knowledge. For the remaining question blocks, the median number of responses was 116 for RQ2, 120 for RQ3, and 112 for RQ4. Only for one question (Q3.9, dealing with specific aspects of test automation) was the number of answers below 100, namely 21. Both the overall return rate and the return rate for the single answers are higher than typical return rates for software engineering surveys conducted in industry, which often range between 10% and 25% [22], [23]. The high return rate gives us confidence that our survey accurately reflects the opinion of the sampled developers. IV. STUDY RESULTS In this section, we highlight key results of our study that directly address the research questions from Section III. A. Respondents’ demographics Table V and Fig. 3 report demographic information about the study respondents, namely their age, years of experience, years spent in ING NL, and their main skills (multiple answers were allowed). Most of the respondents are relatively senior both in terms of age and development experience (the majority are between 30 and 50 years old and have over 11 years of experience). Their main technological expertise relates to Java or JavaScript programming, and to both relational and NoSQL databases. B. RQ1: What are general development practices within the Continuous Delivery pipeline? Methodology. When we asked about the kind of methodology adopted in the development process (Q1.1), almost all developers (97%) mentioned that they use Scrum as their development methodology. At the same time, the product vision (Q1.2) is clear to only 68% of the respondents.
One important reason for the lack of clarity is frequent change, which is pretty common in agile development. Interestingly, while most of the respondents (69%) prefer to use an electronic Scrum board (Q1.3), a quite high percentage (31%) still prefers a physical Scrum board3. On the one hand, respondents say that an electronic Scrum board facilitates distributed team work (84%) and provides automated calculation of sprint progress metrics (59%). On the other hand, a physical board is always visible in the room (90%) and improves team cohesion (64%). Sprint Management. Developers declared that, during a sprint, they add some tasks to the already planned ones (Q1.4). As a main reason for that, 60% of them indicate bug fixing, followed by missing detailed requirements (33%); only 7% mentioned high-level business requirements missing during the planning. During the sprint retrospective (Q1.5), i.e., the meeting in which the sprint activities are discussed in order to understand what went well, what went wrong, and how things can be improved for the next sprint, developers mainly discuss and try to harmonize the way they work (88%). Few responses concern bad implementation (1%), the product not meeting functional (1%) or non-functional (1%) requirements, and other issues (7%). Fig. 4 reports the average percentage of completed user stories at the end of a sprint (Q1.6). In most cases, respondents agree that no less than 80% of user stories are completed. Besides functional requirements, user story completion also concerns different kinds of non-functional requirements: developers consider security (89%), reliability (86%), and maintainability (82%) as high-priority requirements. The main monitoring mechanisms for the sprint progress (Q1.7) are the sprint burn-down (60%, tracking the sprint completion) and the velocity, i.e., the number of story points [24, page 87] completed per sprint (58%).
A small percentage of respondents consider the number of postponed defects (3%) or the technical debt incurred (7%) as important indicators able to influence the completeness of a user story. In some cases, a completed user story may be rolled back to “in-progress” (Q1.8), mainly because developers realize that functional (34%) or non-functional (25%) requirements are not completely implemented. Only in 22% of the cases does this occur because of changes in users’ expectations. 7 respondents (5%) explicitly specified that in case they realize changes in requirements – e.g., because of changed users’ expectations – they rather open a new user story than reopen a previously closed one. One respondent even clarified that a “done” user story should be considered to be in production already, and therefore should not be reopened. When a previously resolved defect occurs again (Q1.11), 52% of the respondents indicate that they open a new issue anyway. This can indicate a careful approach in which developers try to keep the new occurrence of the defect separated from the previous one. Refactoring activities. When we asked about refactoring tasks (Q1.12), 64% of respondents indicated that refactoring is usually properly scheduled. The main reasons for refactoring include improving program comprehension (87%), making changes easier (77%), and helping to find bugs (24%). Those who do not schedule refactoring tasks do so either because such tasks are too time-consuming and take effort away from feature implementation (27%), or because they do not clearly perceive the advantages of refactoring (9%). A large proportion of respondents (64%) indicate other reasons. For example, they mentioned that “refactoring is just performed as it pops up”, that they “naturally consider refactoring as part of other development tasks”, or that “code should be made maintainable right away”.
Also, some respondents indicated planning reasons, i.e., refactoring is part of the user story effort calculation. Last but not least, some indicate that it all depends on the size of the refactoring activity to be performed, i.e., small refactorings are performed together with development, whereas larger ones are kept separate. When scheduled (Q1.13), refactoring tasks most often have a medium priority (70%), with 9% of respondents assigning a high priority and 23% a low priority. Indeed, 42% of respondents indicate that more than 80% of the refactorings planned within a sprint are actually completed (Q1.15). Differently from what Fowler reported [25], refactoring tasks are often performed together with other tasks, as shown in Fig. 5 (Q1.14). Only 5% of respondents declare that they clearly separate refactoring from other tasks. C. RQ2: What are the practices adopted to manage technical debt? Source code comments. The first block of questions we asked about managing technical debt concerned the way and the extent to which developers comment source code. Respondents said they almost always (23%), often (34%), or sometimes (24%) introduce class-level and method-level comments (Q2.1). Instead, as expected, only 3% and 15% of the respondents introduce statement-level comments always and often, respectively (Q2.2). Still, 38% of the respondents introduce them sometimes. In line with the CD process, and with the aim of preserving program understanding, 79% of the respondents update comments immediately when changing the source code, while 13% postpone such changes to a specific phase aimed at producing/updating documentation (Q2.3). Code reviews. Code review (Q2.4) is adopted by almost the whole set of respondents (95%) and, as shown in Fig. 6, its obvious purposes are detecting bad smells (90%) and finding defects (81%). However, code review is also widely used to share code knowledge (85%) or to find alternative ways of implementing a feature (75%).
These results are partially in line with the observations on the code review process at Microsoft [26] and on open-source projects [27]. At Microsoft, finding defects was the most important motivation, followed by code improvement and finding alternative solutions, while sharing code ownership was only ranked seventh. Analysis of bad code smells. Respondents indicate (Q2.5) that code reviews are the premier way of detecting code smells (92%), while 63% of the respondents also use static analysis tools4. The main problems detected (Q2.6) by means of either automated or manual code review are reported in Fig. 7 (a): the majority indicated unused (78%) or uninitialized (62%) variables, null pointers5 (62%), and unreachable code (61%) as the main problems detected. 4Due to confidentiality reasons, we cannot disclose the list of tools being used. 5Including null references in languages not directly using pointers, e.g., Java. Fig. 7. Problems detected by automated and manual code review: (a) Q2.6 – software defects; (b) Q2.7 – bad design choices. In terms of bad design choices (Q2.7) (Fig. 7 (b)), as expected, respondents mainly deal with large function size (75%). Surprisingly, they focus more on low cohesion (71%) than on high coupling (49%), although in previous studies [28], [29] the latter has been perceived by developers as a negative factor for software maintainability and comprehensibility. The majority of respondents (58%) rejected the common wisdom that poor implementation choices occur because of deadline pressure (Q2.9), confirming previous results obtained in open source [30].
Interestingly, almost all respondents (88%) annotate these poor implementation choices: hence the principle of self-admitted technical debt – previously investigated in open source [31], [32] – is pretty well applied in ING NL. When time allows, developers try to refactor such smells using some automated tool support: 71% use tools that automatically enact refactoring actions, such as the Eclipse refactoring infrastructure, rather than tools recommending refactorings (i.e., tools such as JDeodorant [33]), while 29% do it manually. The latter indicate the lack of adequate tools (76%) as the main reason for manual refactoring, but also the lack of trust in automated refactoring tools (15%), confirming the results of studies showing the dangers of using automated tools for applying refactorings [34].

Metric collection. Other than identifying specific defects, developers collect a series of metrics to monitor source code quality (Q2.8). The main metrics used are reported in Fig. 8. Surprisingly, the most important metric is the amount of duplicated code (78%), which traditionally is considered a kind of bad smell too. Other frequently used metrics are the cyclomatic complexity (69%, again an indicator of code smells such as Complex Method) and the number of function parameters (51%, an indicator of the Long Parameter List bad smell). Only 44% of respondents mention LOC.

D. RQ3: What are the testing practices adopted within the Continuous Delivery pipeline?

Test-Driven Development (TDD) and Testing in general. TDD is the practice of “driving development with tests” [35]. As reported in Fig. 9, 34% of the respondents say they always use TDD (Q3.1), 33% answered they use TDD for certain kinds of (sub)systems, and 12% use it when time pressure allows. 22% do not use TDD at all. Respondents reported to adhere to a TDD style when they can create or have existing unit (96%), integration (53%), acceptance (25%), or performance (15%) tests for the functionality they are about to implement.
Reasons for not using TDD are mainly related to TDD not being directly applicable to many types of code changes, e.g., when developing graphical user interfaces (59%), which triggers the need for other kinds of tools, such as capture-replay tools. Another important reason was TDD's time-consuming nature (33%). Regarding testing in general, 47% of the respondents allocate between 25% and 49% of their time for testing (Q3.2), and 31% more than 50% of their time. Developers in the WatchDog study [35] estimated to spend on average around 50% of their time on automated, codified testing, very closely resembling the estimates in our study. One may wonder how accurate developers' self-estimations are and whether developers who claim to use TDD do indeed apply it. Beller et al. [35] found in their WatchDog study that developers spent a quarter of their work time on testing (instead of half, which they originally estimated), and that, even when they reported that they were using TDD, developers practically never applied it strictly [35]. A similar observational study on developers' testing habits could identify whether and how these findings apply in our given context. Casual evidence from another context (not at ING NL) suggests that some developers were referring to acceptance testing with the Framework for Integrated Testing (FIT) [36] as TDD, but meant Behavior-Driven Development (BDD) [37]. Generally, our survey answers suggest that quality assurance through testing is a crucial concern at ING NL. A significant amount of manual work is required for TDD in particular and testing in general. Automated tool support, including test case generation, might help further reduce it. When asking a specific question on the automation of test generation (Q3.8, Q3.9), 17% of the respondents indicated they use some technique or tool to automate test case generation.
A factor that highlights the cost of testing, and suggests that TDD may indeed be followed, is the answer to the question on the continuous updating of test suites for every change (which is in line with the idea of CI). Most of the respondents claim they almost always (58%) or often (28%) update tests when changing production code (if necessary).

Testing strategies and criteria. We found that developers make use of specific testing strategies, such as black box testing, relatively seldom (Q3.5). 52% of the respondents say they do not use any strategy. As regards black box testing, only 20% and 19% use the equivalence class testing and category partitioning [38] criteria, respectively. Regarding white box testing, the main criteria being used (Q3.15) are statement coverage (94%), branch coverage (84%), multiple condition coverage (68%), and in some cases path coverage (42%). Most of the respondents picked multiple options, indicating that, depending on the feature under test, they choose whichever strategy is most suitable. Regarding statement coverage (Q3.12), 84% of the respondents indicated they try to achieve a coverage level of at least 80%. Other than that, as shown in Fig. 10, developers rely on a number of different metrics, mostly the number of failed/passed/blocked test cases (77%) but, for example, also metrics related to how well test cases cover user stories (27%). For unit testing purposes, test cases are often written using (Q3.6) requirements for black box testing (78% of respondents) and source code for white box testing (80%). Only 24% of respondents rely on models. As for integration testing (Q3.7), code is less used (43%), while developers mainly rely on module interfaces (66%).

E. RQ4: How is Continuous Integration performed?

The first question we asked (Q4.1) was about the use of testing in private builds before opening a pull request. As one can expect, results indicate how the use of CI changes the promotion management policies one may adopt.
While in principle [39] one can be tempted to promote code as long as it compiles, with CI developers are encouraged to perform some tests (e.g., unit testing) in their private builds. Indeed, 97% of the respondents indicated they actually do it, while only 3% let the CI perform all tests when builds are performed. In case of build-breaking changes (Q4.2), 96% of the developers confirmed that they interrupt their implementation activities and focus on fixing the build. To minimize conflicts, the majority of respondents (62%) create a feature branch and merge it later into the master branch, even if only 22% of them perform a daily merge (Q4.3). Regarding the frequency of pushing changes to the master branch (Q4.4), results indicate that 60% of developers push changes whenever a small piece of a task is completed, while 30% do it only when a whole task is completed. Only a few respondents (10%) push changes more than once a week.

V. THREATS TO VALIDITY

Threats to construct validity concern the relationship between theory and observation. In a survey, such threats may mainly occur because respondents could interpret a question in a different way than it was conceived, possibly producing misleading results. For example, when answering Q3.1, and as explained in Section IV-D, it is possible that developers believe they are applying TDD, while this is not the case. Whenever possible, the quantitative findings obtained with the survey were confirmed by the observations made by one of the authors, who observed the ING NL development process for three months. Possibly, the most suitable way of complementing the survey would have been a follow-up live interview or a longitudinal study, which is planned for future work. Threats to internal validity concern factors that could have influenced our results. One such factor could be evaluation apprehension [40].
For example, answers to Q2.9 indicated that deadline pressure is not a major cause of poor implementation choices. Another threat is related to the survey return rate. We have shown that the overall return rate is quite high (85%), and generally higher than other surveys conducted in the area of software engineering. Threats to external validity concern the generalization of our findings. The obtained findings are clearly and intendedly confined to the specific case of ING NL, and may or may not generalize to other organizations, even within the same domain. In some cases, e.g., for the use of code reviews, we have shown how our results confirm what was observed in other organizations [26].

Fig. 10. Q3.18 – Test metrics.

VI. RELATED WORK

In recent years, researchers have conducted different studies on the adoption of CI and CD in industry and open source.

Experience reports. Laukkanen et al. [41] interviewed 27 developers at Ericsson R&D to understand their perception of CI. They observed that developers face many technical and social challenges when adopting CI, such as those related to the test infrastructure. An industrial experience report from Kim et al. [42] details a CI setup at the package level, rather than at the source code line level, hence increasing the responsibility of package maintainers. Ståhl and Bosch [11] conducted a literature review on CI practices and found that different software development projects use different CI implementations because of several contextual factors such as size, longevity, budget, competences, organizational structure, or geographical distribution. This suggests that contradicting elements in the results of our survey, when compared to other studies, can possibly be explained by variations in context.

Build failures. A challenge in CI is dealing with build failures, which might negatively impact developers' productivity. Thus, researchers have investigated the most common causes of these failures.
For example, Miller [8] at Microsoft reported that, for the Service Factory system, build failures are mainly due to compilation failures, failing tests, static analysis tool issues, and server failures. Seo et al. [43] at Google found that most failures are due to dependency-related issues between software components. In contrast, Beller et al. [5] analyzed build failures due to test executions. In particular, they found that testing is an important part of CI and is also the most frequent reason for build failures.

Benefits of CI practices. Other researchers have investigated the effect of CI on code quality and developers' productivity. For example, Miller [8] reported that for the Service Factory system the CI cost was about 40% of the cost of an alternative (non-CI) process achieving the same level of quality. Deshpande and Riehle [44] analyzed commit data from open source projects and found that, differently from industrial development, in open source the adoption of CI has not yet influenced development and integration practices. However, Vasilescu et al. [45] mined GitHub projects and found that CI makes teams more productive and improves the likelihood of pull request merges, without sacrificing the projects' quality.

Tools and techniques. Brandtner et al. [46] focus on improving common CI practices; in particular, they developed a platform, namely SQA-Mashup, which dynamically integrates data from various CI tools and tailors the information for developers. In other work, Brandtner et al. [47] propose a rule-based approach to automatically profile stakeholders based on their activities in version control systems and issue tracking platforms. Elbaum et al. [48] presented regression test selection techniques to make continuous integration processes more cost-effective.
While the studies described above focused on the CD experience itself or on introducing new tools and techniques, our survey conducted in ING NL focuses more on the development practices within the CD pipeline, with a particular emphasis on how DevOps engineers manage technical debt and perform testing.

VII. CONCLUSIONS

This paper reported the results of a survey – conducted with 152 developers of a large financial organization (ING Netherlands) – about their use of Continuous Delivery. The survey featured questions about (i) the development process and task management, (ii) managing technical debt, (iii) testing, and (iv) Continuous Integration activities. The main findings of the survey suggest that:
- While refactoring is properly scheduled, contrary to both common wisdom and what Fowler stated [25], it is often performed together with other development activities, as it is considered part of the user story effort, and this prevents releasing poorly maintainable source code.
- Respondents tend to “self-admit” technical debt when writing source code, in order to be able to fix it when possible. Instead, they reject the hypothesis that such smells are introduced because of deadline pressure. Then, they use both code reviews and automated tools to identify and refactor code smells.
- The majority of developers mention they use TDD, although we do not know whether they are strictly applying TDD. At the same time, quality assurance in the form of (manual) testing requires a significant portion of the time allocated for a sprint.
- The use of a Continuous Integration infrastructure encourages developers to test their changes using private builds, and to give very high priority to fixing build breakages.

In conclusion, our survey-based study shows how practices such as TDD or the identification and refactoring of bad smells (with the help of automated tools) are put into practice in a large organization such as ING NL, sometimes confirming common beliefs, sometimes contradicting them.
This study requires replications in other organizations, and needs to be complemented with other studies, e.g., case studies, controlled experiments, and longitudinal field studies, in which developers' activities can be closely observed to gain a better understanding of their behavior when working within a CD pipeline.

ACKNOWLEDGMENTS

The authors would like to gratefully thank all the study participants, as well as all developers from ING NL who provided valuable input for the planning of this study.
Computing On Many Cores

Bernard Goossens, David Parello, Katarzyna Porada, Djallal Rahmoune

HAL Id: lirmm-01302904
https://hal-lirmm.ccsd.cnrs.fr/lirmm-01302904
Submitted on 15 Apr 2016

DALI, UPVD, 52 avenue Paul Alduy, 66860 Perpignan Cedex 9, France; LIRMM, CNRS: UMR 5506 - UM2, 161 rue Ada, 34095 Montpellier Cedex 5, France

SUMMARY

This paper presents a new method to parallelize programs, adapted to manycore processors. The method relies on a parallelizing hardware and a new programming style. A manycore design is presented, built from a highly simplified new core microarchitecture, with no branch predictor, no data memory and a three-stage pipeline. Cores are multithreaded, run out-of-order but not speculatively, and fork new threads. The new programming style is based on functions and avoids data structures. The hardware creates a concurrent thread at each function call. Loops are replaced by semantically equivalent divide-and-conquer functions. Instead of computing on data structures, we compute in parallel on scalars, favouring distribution and eliminating inter-thread communications. We illustrate our method on a sum reduction, a matrix multiplication and a sort. C implementations using no array are parallelized.
From loop templates, a MapReduce model can be implemented and dynamically deployed by the hardware. We compare our method to pthread parallelization, showing that (i) our parallel execution is deterministic, (ii) thread management is cheap, (iii) parallelism is implicit and (iv) functions and loops are parallelized. Implicit parallelism makes parallel code easy to write. Deterministic parallel execution makes parallel code easy to debug. Copyright © 0000 John Wiley & Sons, Ltd. Received …

KEY WORDS: Computing model, Manycore processor, Parallelizing core, Deterministic parallelism, Parallel locality, Multithreading, MapReduce

1. INTRODUCTION

In the past decade, processors have evolved from single core CPUs to multicore and manycore processors and GPUs. Multicores started with the 2-core Intel Core 2 introduced in July 2006 and have grown up to the 18-core Intel Xeon E7-88x0-v3 introduced in May 2015. The Nvidia NV1 was the first GPU, with only one core, launched in September 1995. The Nvidia GM200, launched in March 2015, has 3072 cores. Manycores were introduced by the Tilera Tile64 (64 cores) in 2007. Intel followed with the 61-core Xeon Phi (introduced with a 32-core version in 2010). Kalray proposed in 2014 the 256-core MPPA, which was recently upgraded to the 288-core MPPA2. These different industrial products reflect three conceptions of parallel programming. The multicore processors (from 2 cores to a few tens) address general purpose sequential applications. The manycore processors (from a few tens to a few hundreds) target parallel applications. The GPUs (a few thousand cores) are used for vectorized computations.

*Correspondence to: Email: goossens@univ-perp.fr

For a given problem, how can we match its programmed solution to the best suited processor? Should we write a sequential program run on a single thread of a multicore processor?
Would it be cost-efficient to parallelize the code and run it on a manycore processor? Or can we find enough regularities in the data to vectorize the solution and implement it on a GPU? A fourth option could be to decompose the problem and have multiple pieces of code running on each of the three types of parallel processors, using a heterogeneous computer [1]. Each of the four possibilities leads to different programming languages (Java, High Performance Fortran, Glasgow Parallel Haskell, Cilk, Cuda, OpenCL), libraries (Pthreads, OpenMP, MPI) and combining tools (MPI+OpenMP, Cuda+OpenMP, Cuda+MPI). The final result, i.e. the program solving the initial problem, is likely to be tightly bound to the target computer. Migrating from a multicore processor to a manycore one, a GPU or a hybrid-core computer means rewriting most of the code. Adapting programs to new processors is not done through a simple recompilation. Is this situation inherent to the variety of problems or does it come from systems which are not abstract enough to be applicable to the whole range of computational problems? In this paper, we advocate reconsidering both the hardware and the software, to provide a single frame into which parallel programs can be quickly designed and safely run. To illustrate some aspects of the complexity of current parallel programming choices, we consider the matrix multiplication problem. Figure 1 shows, among a full range, four organizations of the threads computing a matrix multiplication $C[m,p]=A[m,n]*B[n,p]$, in increasing parallelism order. In each subfigure, the red part represents the sources and results of a single thread. Option one uses one thread per matrix $C$ line. Option two has one thread for a subset of one matrix $C$ column. Option three has one thread per matrix $C$ element. Option four has one thread per multiplication.
These organizations result in four quite different programs, whether they are coded with the pthread library, MPI or OpenMP (GPUs are not further considered in the paper). This is the consequence of explicit parallelism, i.e. thread creations, synchronizations and communications. Each version has different programming difficulties. In the fourth option, a result matrix element must be shared by the $n$ threads writing to it. In the second option, the program should pay attention to false sharing at the junction of two consecutive subsets of a result column, i.e. the size of each thread computation should be adapted to the cache organization. There is no best choice. It depends on the number of cores, the thread creation and synchronization cost, and the communication cost, which itself depends on the cores and memory topology. Even worse, a choice can be good today and bad later, after hardware and OS upgrades. The rightmost computation seems to be the least efficient because it involves too small threads and the writers must be synchronized. But this organization captures all the available data parallelism. In this paper, we describe a new parallel programming model aiming to replace current thread-based models. The proposed model relies on a parallelizing hardware briefly presented in section 2 and on a new programming style developed in section 3. The proposed hardware is based on a simplified out-of-order core. Our processor is made of many small cores using registers rather than memory, as a GPU, but it is MIMD rather than SIMD. The programming model relies on a standard programming language like C with a gcc-like compiler suite. The parallelization, instead of being static (compiler parallelizing directives) or semi-static (compiler directives + OS primitives), is fully dynamic, i.e. parallelism is implicit in code written in a standard sequential language.
The syntactical order of the instructions in the code fixes the deterministic order of the computation, i.e. the sequential semantic. The program is run in parallel thanks to the parallelizing hardware. The parallel run is reproducible because the parallel semantic is equivalent to the sequential semantic. A program written in a high level language like C can be interpreted as a parallel program when the function call instruction semantic is slightly changed, assuming it forks. A resume thread is created in parallel with the calling one. The core hardware implements the forking semantic of the call instruction. Each core is multithreaded (like Intel Hyperthreading [2]) and communicates with its neighbours through a bidirectional ring. At function call, a free thread slot is allocated on the successor core to host the resume thread, which is fetched in parallel with the main thread. Parallelism is implicitly deployed and managed by the hardware. Communications and synchronisations are implicitly derived from producer-to-consumer dependences. A reader is matched with and synchronized to its unique writer by hardware renaming [3]. The model has many advantages, among which:
- a run is parallel and deterministic,
- parallelism is implicit,
- loops are parallelized,
- no OS overhead,
- no hypothesis on the number of available resources,
- easy debugging of the parallelized code.

Hardware parallelization is compared to OS parallelization in section 4. Section 5 places our proposition in the context of current parallelizing methods and tools and concludes.

2. A PARALLELIZING HARDWARE

Figure 2 shows the manycore processor design. The left part of the figure is the general topology of a 32-core processor and the right part is the inside of a core. The cores are linked by a bidirectional ring (magenta color). A core communicates with its successor and predecessor: the send unit in green is linked to the predecessor and to the successor receive units in red.
The processor has a set of shared L2 caches which hold code and I/O data (the L2 access buses are in cyan color). Each core hosts a set of threads (e.g. 16 threads per core). A thread is represented by its PC and its renaming table (RT). The core pipeline has three stages (right part of the figure). The fetch stage selects a ready thread PC to fetch one instruction from IL1. The instruction is saved in the thread instruction buffer (IB). The rename stage selects a full IB and the instruction it holds is decoded and renamed through the thread RT. The renamed instruction is saved in the instruction table (IT). The compute stage selects one ready instruction in IT which is executed: it reads its sources from and writes its result to the Renaming Registers (RR). At full speed, the processor runs one instruction per cycle (IPC) per core (e.g. 1K IPC for a 1K-core processor). The Instruction Set Architecture (ISA) is restricted to register-register, control and I/O instructions. There are no memory-access instructions.

```c
#include <stdio.h>

/* f(i) returns vector element i */
static inline int f(int i) { return i; }

int sum(int i, int n)
{
    if (n==1) return f(i);
    if (n==2) return f(i)+f(i+1);
    return sum(i, n/2) + sum(i+n/2, n-n/2);
}

int main(void)
{
    printf("s=%d\n", sum(0, 10));
    return 0;
}
```

Figure 3.
A vector sum reduction programmed in C

```asm
sum:    cmpq $2, %rsi     /* if (n>2)              */
        ja .L2            /* goto .L2              */
        movq %rdi, %rax   /* rax = f(i)            */
        subq $1, %rsi     /* if (n=1)              */
        je .L1            /* goto .L1              */
        addq $1, %rdi     /* rdi = f(i+1)          */
        addq %rdi, %rax   /* rax = f(i)+f(i+1)     */
.L1:    ret               /* stop                  */
.L2:    movq %rsi, %rbx   /* rbx = n               */
        shrq %rsi         /* rsi = n/2             */
        fork $3           /* start thread          */
        push %rdi         /* send rdi              */
        push %rsi         /* send rsi              */
        push %rbx         /* send rbx              */
        call sum          /* rax = sum(i,n/2)      */
        pop %rbx          /* receive rbx           */
        pop %rsi          /* receive rsi           */
        pop %rdi          /* receive rdi           */
        movq %rax, %rcx   /* rcx = rax             */
        addq %rsi, %rdi   /* rdi = i + n/2         */
        subq %rsi, %rbx   /* rbx = n - n/2         */
        movq %rbx, %rsi   /* n = n - n/2           */
        fork $4           /* start thread          */
        push %rcx         /* send rcx              */
        call sum          /* rax = sum(i+n/2,n-n/2) */
        pop %rcx          /* receive rcx           */
        addq %rcx, %rax   /* rax += sum(i,n/2)     */
        ret               /* stop                  */
```

Figure 4. A vector sum reduction translated into x86

Each core has two special units to send and receive messages to and from its neighbours. A message is sent to the prior or next thread, hosted by a neighbour core (mostly, the successor). A message contains a register. A new thread Program Counter (PC) value is sent to the successor core when a call instruction is decoded. A register source r which is not locally set is imported from the core hosting the prior thread. Figure 3 is a vector sum reduction programmed in C and figure 4 is its translation into x86. Function f(i) returns vector element i. The code does not implement the vector as an array but as a function returning any of its elements (function f could, instead of providing the value itself, get it from an input file; the OS I/O driver should allow parallel I/O, as in MPI-2 [4]). The x86 translation does not use any memory access. The computation is done within the set of architectural registers.
The `fork k` instruction creates a new thread on the successor core, starting from the resume address after the next `call` instruction. In between are k `push r` instructions. The `push r` instruction sends a copy of register r to the new thread. The creating thread sends k values to the created thread. For example, the `fork $3` on line 11 starts a remote thread. The next three `push` instructions send copies of registers rdi, rsi and rbx to the created thread. The `call` instruction sends the resume code address, i.e. a copy of the return PC. Once the core hosting the new thread has received the 3 registers and the PC value, it starts fetching. The thread running the `call` on line 15 jumps to the label target, i.e. to line 1, and the created thread fetches from line 16 in parallel. The compiler inserts a `pop r` instruction to receive register r in the resume thread. For example, the resume thread started at line 16 receives registers rbx, rsi and rdi sent by the main thread. The send/receive machine instruction names are `push`/`pop`, which may seem confusing. There is no stack involved if the run is parallelized, only a value transmission (sent by `push` and received by `pop`). A `push` instruction is turned into a send operation, which waits for the pushed register value and then sends it to the destination. A `pop` instruction is turned into a receive operation, which waits for the transmitted value and then writes it to the register destination. Push and pop instructions may be run out-of-order. The reception order is irrelevant. The presence of the `push`/`pop` instructions allows the hardware to switch between parallel and sequential modes. The `fork` instruction, which creates a remote thread, blocks until a free slot has been allocated.
If the instruction is run by the oldest thread and no slot is free in the selected core, a fail message is immediately sent back and the creating thread switches to sequential mode. The `call`, `ret` and `push`/`pop` instructions then regain their original stack-related semantics. Each thread slot has a fixed-size private stack, which expands into L2. The thread switches back to parallel mode when a `ret` instruction empties the stack. This special behaviour of the oldest thread guarantees that it cannot be blocked forever, ensuring a deadlock-free parallel execution. When run in parallel mode, a `ret` instruction stops its thread. In parallel mode, the `call` and `ret` instructions do not save/restore the return address on the stack. Figure 5 shows the parallelization of 11 threads summing a 10-integer vector. Each thread is surrounded by a red rectangle. For example, thread 1 runs 26 instructions (9+9+8) on core 0. The `ret` instruction on line 8 ends the thread. When the `fork $3` instruction on line 11 is run, a new thread is started (thread 6, on successor core 1). When the `call` instruction on line 15 is run, the return address (line 16) is sent to the created thread as its starting fetch address. Thread 1 successively starts threads 6 and 2, both at line 16. Every thread is linked by the hardware to its predecessor and successor (red lines on the figure). When the successor links are followed, the sequential trace is rebuilt. Threads have a hierarchical level, shown as a red number in the top right corner of each thread's surrounding rectangle; the higher the level, the lower its number. For example, threads 1, 6 and 11 form the highest level, 1. The eleven threads are deployed in seven successive steps in the manycore thread slots. As soon as a thread has executed all its instructions, it frees its hosting slot. In the example of figure 5, threads 3 and 8 are freed first, then 4 and 9, eliminating level 3. The oldest thread (i.e.
thread 1) is freed in parallel with 3 and 8. A thread can be freed if it is the oldest, or if its predecessor has a higher level and its successor has a higher or equal level. During the run, the tree of threads expands on the right and at the bottom (new threads) and contracts on the left and at the bottom (ended threads). If a core gets saturated with threads, the expansion stops there while no free slot is available, but the contraction continues, freeing slots. Concerning synchronizations and communications, the hardware matches a reader with its writer through register renaming. The written value is copied to the reading source. For example, on figures 4 and 5, instruction 7 in thread 1 writes the sum f(0)+f(1) into register rax (lower blue box on figure 5). The rax source of instruction 19 in thread 2 (upper blue box) is matched with the rax destination of instruction 7 in thread 1, as it is the first rax writer met on the backward travel along the ordered threads, starting from thread 2. In the same manner, the rax source of instruction 27 in thread 11 (upper light blue box) is matched with the rax destination of instruction 27 in thread 10 (lower light blue box). Differently, the rcx popped by instruction 26 in thread 11 (rightmost green box) is matched with the rcx pushed by instruction 24 in thread 6 (leftmost green box). In this case, the rcx value is sent directly from thread 6 (core 1) to thread 11 (core 2) as soon as it is computed. All these communications concern neighbour cores.
3. A NEW PROGRAMMING STYLE

To take advantage of the parallelizing hardware, the programmer should adopt a new programming style, with the following requirements:

- decompose the code into functions,
- translate for and while loops into divide-and-conquer template functions,
- avoid separating inputs and outputs from computations,
- recompute rather than store and load,
- avoid data structures and only use scalars: no memory, no pointer, no array, no structure.

This is illustrated below by a parallelized matrix multiplication and a parallelized sort. The programming style we describe resembles the functional programming paradigm [5] [6] [7]. The program examples are written in C and most of them would be more elegant in Haskell; we have chosen C and x86 to stay close to the hardware. The style we use is in some ways more restrictive than the functional paradigm: a program is a composition of functions which only compute scalars, rather than lists as in Lisp. We want the computation to be optimally distributed, which requires computing on scalars instead of data structures. The programming style we propose is close to lambda-calculus [8], with expansion, i.e. substitution, and reduction, but it may allow a restricted form of assignment, to save intermediate computed scalars to registers for later reuse. The general programming pattern is to organize a parallel computation from each result to output, and backward up to the inputs it uses. Each result is produced by an independent computation, using its own copies of the inputs. Each computation is a succession of transformations from the used inputs to the target output, with no intermediary structured storing. If a set of scalar results is to be used more than once, each scalar may either be individually recomputed or saved in a register and later restored.
But the structured set of data should not be built in memory and loaded back, because storing a structure in a manycore centralizes its data. In a manycore, it is slower to gather a data structure, keep it in memory and later scatter its components than to compute and use the elements separately. It is also probably less energy-efficient, because of the energy cost of memorization and communication. If the program must compute on structured data, the data should only be structured externally to the computation. A data structure is input in parallel and its scalar components are distributed to the set of parallel read instructions. The scalar elements composing an output data structure are written in parallel. The parallel computation itself does not manipulate any data structure. The hardware is composed of some external memory holding the code to be run and the structured input and output data. This external memory has a high throughput to allow for parallel I/O requests from the processor. The processor is the one described in section 2, using private and shared caches to shorten the external memory access latency. Input machine instructions move the data from the external input to the processor, where the results are computed; the results are sent back to the external memory with output machine instructions. The memory hierarchy is naturally coherent, as output destinations are written only once. The sum reduction program given on figure 3 is a composition of scalar sums. There is no vector in the computation. The scalars to be summed are input when needed, i.e. when the sum function reaches a recursion stop condition to sum up two adjacent values. They are given by an initializing function f returning the vector value for index i. The final sum is sent to output. The sum reduction does not use any storing memory. Intermediate sums are computed in allocated renamed registers, which are freed with their computing thread.
Each core contains 128 renaming registers (RR file on figure 2, right), shared by its 16 thread slots. A 1K-core processor has 128K registers, making data memory unnecessary to hold intermediate results. As the programming style avoids memory storing and loading, communications are reduced to the sent/received registers and the result transmission from a main thread to a resume one. Because a complex computation is decomposed into scalar computations, the run is fully distributed. The thread creation being done by the hardware, there is no overhead and parallelization can be fine grain. Finally, thread ordering and renaming ensure a deterministic computation, because the partial-order run preserves the producer-to-consumer dependences of the sequential order. A consequence of the implicit parallelism is that the parallel code can be tested on a sequential machine. The deterministic execution makes debugging as easy as for a sequential run: bugs are reproducible. As an illustration, all the C codes in this paper can be compiled with a standard gcc compiler, and their run on a sequential processor produces the same results in the same order as the parallel run would on a parallelizing hardware, thanks to determinism.

```c
void for_loop(int i, int n, void (*body)(), void *arg) {
  if (n==1) {body(i, arg); return;}
  if (n==2) {body(i, arg); body(i+1, arg); return;}
  for_loop(i, n/2, body, arg);
  for_loop(i+n/2, n-n/2, body, arg);
}
```

Figure 6. For loop template function

The time complexity of algorithms and programs in the proposed parallel execution model is not measured in terms of operations but in terms of depending threads. For example, the sum reduction complexity is \( O(\log n) \), which means that \( \log n \) thread launching steps are needed to deploy the code on a hardware having enough available thread slots.

3.1. Parallelizing loops

To be parallelized by our hardware, a loop must be written as a function.

3.1.1. For loops.
Figure 6 shows a for loop template function. Each loop parallelized this way has a \( O(\log n) \) complexity to deploy the threads running the \( n \) iterations. An iteration-excluding condition is added as an argument in the right-hand template of figure 8. Figure 7 shows the threads created by a 10-iteration parallelized for loop. The figure 6 template function uses pointer arg to encapsulate the body function arguments. This way of transmitting an unknown number of values was chosen to keep the code close to the pthread usage. As the parallelizing hardware we propose does not have any memory access instruction in the ISA, pointers and structures are not available. The encapsulated arguments should be interpreted as a list of scalar values rather than as a C structure, like a va_list type provided by stdarg.h.

```c
//i=lower;
//while (!cond(i, arg_cond)) {
//  body(i, arg_body); i++;
//}
//to parallelize the loop,
//replace it by a call to
//while_loop(lower, 1,
//  cond, arg_cond, body, arg_body);

//returns the number of iterations
//not excluded by the cond function
int for_cond(int i, int n,
  int (*cond)(), void *arg_cond,
  void (*body)(), void *arg_body) {
  int c1, c2;
  if (n == 1){
    c1 = cond(i, arg_cond);
    if (!c1) body(i, arg_body);
    return !c1;
  }
  if (n == 2){
    c1 = cond(i, arg_cond);
    c2 = cond(i + 1, arg_cond);
    if (!c1) body(i, arg_body);
    if (!c2) body(i + 1, arg_body);
    return (!c1) + (!c2);
  }
  c1 = for_cond(i, n/2,
         cond, arg_cond, body, arg_body);
  c2 = for_cond(i + n/2, n - n/2,
         cond, arg_cond, body, arg_body);
  return c1 + c2;
}

//returns the number of iterations
//in the while loop;
//launches n, 2*n, 4*n ...
//iterations until cond
int while_loop(int i, int n,
  int (*cond)(), void *arg_cond,
  void (*body)(), void *arg_body) {
  int nb_iter =
    for_cond(i, n, cond, arg_cond,
             body, arg_body);
  if (nb_iter == n)
    nb_iter +=
      while_loop(i + n, 2*n,
                 cond, arg_cond,
                 body, arg_body);
  return nb_iter;
}
```

Figure 8. While loop and conditional for loop template functions

3.1.2. While loops.
The left part of figure 8 shows the while loop template function. The for_cond function shown on the right part of the figure is a for loop with an exclusion condition cond. It returns the number of non-excluded iterations. The while loop runs n iterations in a for_cond loop. If no iteration is excluded by the for_cond, the while_loop function is called recursively to run 2*n more iterations. It runs 1 iteration, then 2, 4, 8 ... until the for_cond reaches the cond condition. It returns the number of iterations in the while loop. Each while loop parallelized this way also has a O(log n) complexity to deploy the threads running the n iterations. Figure 9 shows two applications, the left one using the figure 8 while_loop pattern. On the left of the figure is the parallelized computation of my_strlen. The while_loop function runs 1+2+4+8 iterations. The for_cond called for 8 iterations returns 4, which stops the while_loop recursion. The computed string length is 1+2+4+4=11. (It may seem tedious to write a function like s(i) for each declaration of an initialized array; the compiler can be adapted to translate such declarations into functions.) On the right of figure 9 is a parallelized search of a character in a string. The my_strchr function uses the for_reduce pattern defined on figure 11. The parallelization of a while loop launches iterations beyond the exit condition. The while loop pattern assumes that once cond is true, it remains true for the following iterations, which are then all excluded by the for_cond loop. This works for the my_strlen function example (left of figure 9) because accessing beyond the string end returns '\0'. It does not work for the my_strchr function (right part of the figure): in a search, iterations after the searched element are parasitic and should be explicitly excluded. The loop is parallelized by the for_reduce function, which is a for loop computing a reduction.
The length l of the searched string s is computed and the searched character is compared in parallel to each character of s. The reduction returns the leftmost match index. If the searched character is not found, the reduction returns l + 1. As the for_loop, the while_loop, the for_cond and the for_reduce template functions use pointer arguments to be interpreted as lists of scalars, transmitted from the caller to the callee through registers. Figure 10 shows the threads created by the my_strlen("hello world\0") execution. 3.1.3. Reduce for loops. Figure 11 shows the template of a reduction for loop and a revisited version of the vector sum reduction. Any similar template can be written to provide a while_reduce and all the necessary tools to build a MapReduce program [9]. The get and reduce functions in for_reduce and cond and body functions in for_loop, for_cond and while_loop should be carefully programmed to avoid any serialization of the run, as we have done in the given examples. It would be a design error to increment a counter in the my_body function of the my_strlen example to compute the length. The hardware would run the loop correctly but serially because of the recurrence in the iteration body. 
```c
// rnv is the reduction neutral value
int for_reduce(int i, int n, int rnv,
  int (*get)(), void *arg_get,
  int (*reduce)(), void *arg_reduce){
  if (n==1)
    return reduce(i, n, get(i, arg_get),
                  rnv, arg_reduce);
  if (n==2)
    return reduce(i, n, get(i, arg_get),
                  get(i+1, arg_get),
                  arg_reduce);
  return reduce(i, n,
    for_reduce(i, n/2, rnv,
               get, arg_get,
               reduce, arg_reduce),
    for_reduce(i+n/2, n-n/2, rnv,
               get, arg_get,
               reduce, arg_reduce),
    arg_reduce);
}
```

**Figure 11.** A reduce for loop template and a revisited version of sum

```c
int a[2][3]= {{1,2,3},{0,1,2}},
    b[3][4]= {{2,3,4,5},{3,2,1,0},
              {0,1,2,3}}, c[2][4];

void print_mat(int *a, int m, int n){
  int i,j;
  for (i=0; i<m; i++){
    for (j=0; j<n; j++)
      printf("%d ", *(a+i*n+j));
    printf("\n");
  }
}

void imatmul(int m, int n, int p){
  // c[m][n] = a[m][p] * b[p][n]
  int i, j;
  for (i=0; i<m; i++)
    for (j=0; j<n; j++)
      *((int *)c+i*n+j) = sum(0,p,i,j,n,p);
}

int sum(int fk, int nk, int i, int j,
        int n, int p){
  if (nk==1)
    return *((int*)a+i*p+fk) *
           *((int*)b+fk*n+j);
  if (nk==2)
    return *((int*)a+i*p+fk) *
           *((int*)b+fk*n+j) +
           *((int*)a+i*p+fk+1) *
           *((int*)b+(fk+1)*n+j);
  return sum(fk,nk/2,i,j,n,p) +
         sum(fk+nk/2,nk-nk/2,i,j,n,p);
}
```

**Figure 12.** A matrix multiplication programmed in C

3.2. A Parallelized Matrix Multiplication

3.2.1. The classical matrix multiplication program.

Figure 12 shows the C code of a matrix multiplication. The classical matrix multiplication algorithm is a good illustration of how the parallelizing hardware can parallelize nested for loops. If we use the program on figure 12 as a canvas for a parallelized implementation, the parallel program will probably be organized in three different phases. The first phase sets the inputs, the second phase computes the product and the third phase outputs it. The first phase appears in the sequential code as the matrix a and b initializations. This job is done by the OS at process start, by copying the data segment from the ELF file into memory.
The third phase sends each element of the product matrix to the OS output buffer (the logical output driver called by printf); from there, the buffer content is sent to the physical driver (either a display or a file). If the OS is not parallelized (e.g. Unix), I/O are sequential, which sequentializes phases one and three. Otherwise, the three phases may all be parallelized. To avoid serializations, input and computation may be fused (i.e. start computing some product as soon as its input data are set), as well as computation and output (i.e. output one element as soon as it is computed). Input matrix elements are copied from file to memory in parallel. In parallel with the inputs, products are computed with the available data from memory. In parallel with the product computations, the computed sums of products are copied to the output file. Even with this much parallelization, we have a big communication problem. The input matrices are centralized in the core which runs the loader _start function. Each element is consumed by many products, which can be distributed on many cores, requiring many communications from the owner's cache to the consumers' ones. The same communication problem applies between the product producers and the vector sum consumers, with the aggravating difficulty of coherent cache updates. The parallelizing processor we have described in section 2 runs the four function calls in main in parallel (figure 12, bottom right part). It parallelizes the input matrix printing, the product matrix computation and its printing. However, it does not parallelize the for loops, as iterations are not functions. The classical matrix multiplication program is not suited to our parallelizing hardware.

3.2.2. Parallelizing the matrix printing and the vector product.

Figures 13 and 14 show the functions get_a and get_b reading one element of matrices a and b.
Instead of reading the input values from the data memory, the threads read them from the code memory (or from a file), which can be duplicated and cached in all the requesting cores.

Figure 13. Function to get matrix a elements

```c
int get_a(int i, int j) {
  switch(j){
    case 0: if (i==0) return 1;
            else return 0;
    case 1: if (i==0) return 2;
            else return 1;
    case 2: if (i==0) return 3;
            else return 2;
  }
  return 0;
}
```

Figure 14. Functions to get matrix b elements and to print a matrix

```c
int get_b(int i, int j){
  switch(i){
    case 0:
      switch(j){
        case 0: return 2; case 1: return 3;
        case 2: return 4; case 3: return 5;
      }
    case 1:
      switch(j){
        case 0: return 3; case 1: return 2;
        case 2: return 1; case 3: return 0;
      }
    case 2:
      switch(j){
        case 0: return 0; case 1: return 1;
        case 2: return 2; case 3: return 3;
      }
  }
  return 0;
}
```

```c
typedef struct {int fj; int nj; int n; int p;
                void (*body_j)(); int (*get_m)();} Arg_i;
typedef struct {int i; int n; int p;
                int (*get_m)();} Arg_j;

void body_pr_j(int j, void *arg){
  Arg_j *a=(Arg_j *)arg;
  printf("%d ", a->get_m(a->i, j));
  if (j==a->p-1) printf("\n");
}

void body_pr_i(int i, void *arg){
  Arg_i *ai=(Arg_i *)arg;
  Arg_j aj;
  aj.i=i; aj.n=ai->n; aj.p=ai->p;
  aj.get_m=ai->get_m;
  for_loop(ai->fj, ai->nj,
           ai->body_j, (void *)&aj);
}
```

The right part of figure 14 shows the two nested loops used to print a matrix. It illustrates the way the for loop template function implements nested loops.

3.2.3. Parallelizing the matrix multiplication.

On figure 15, in the body_mm function, the computed sum s would classically be stored into an array element holding C[i,j]. Instead, element C[i,j] is consumed by the printing function rather than stored. The threads' sequential ordering ensures that even though the C values are computed out-of-order, they are output in order. It is easy to check that when the code is run sequentially, matrix C is properly printed.
As the parallel run preserves sequential dependences, the computed \( C \) values can be written in order by an ad hoc OS output driver in the video memory.

\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figure15.pdf}
\caption{Computing the matrix product and \texttt{main} function}
\end{figure}

The run is fully distributed to capture all the available data parallelism.

\begin{figure}[h]
\centering
\includegraphics[width=\textwidth]{figure16.pdf}
\caption{A binary tree sort programmed in C}
\end{figure}

To summarize, the parallelized matrix multiplication \( C[m, p] = A[m, n] \times B[n, p] \) is a set of \( m \times n \times p \) threads, each computing a single product. \( C[i, j] \) is a sum reduction of the \( n \) products \( A[i, k] \times B[k, j] \). The input matrices are not organized as arrays: their elements are given by two access functions \texttt{get\_a} and \texttt{get\_b} returning \( A[i, j] \) and \( B[i, j] \) for indexes \( i \) and \( j \). The resulting matrix is not stored as an array either; each element is sent to output as soon as it is computed. The complexity is $O((\log m) \times (\log n) \times (\log p))$, requiring $(\log m) \times (\log p)$ steps to deploy all the threads computing the $m \times p$ elements of matrix $C$ and, from there, $\log n$ steps to sum the $n$ products by a reduction.

### 3.3. A Parallelized Sort

Figure 16 shows a binary tree sort programmed in C. This is not the best sorting algorithm to parallelize, but it emphasizes the potential communication problems. The process running the program sets the initial unsorted vector in the data region (OS copy from the ELF file). The for loop inserts each element of the vector into a binary tree. The travel function copies each element of the tree into the output buffer. Concurrent updates of a tree require complex mutual exclusion, especially if we want to interleave the tree construction, its traversal and its destruction to fully parallelize the sort.
Even if a satisfying parallel code is built, its run requires three copies of each element (from ELF file to data region, then to the tree, and from the tree to the output buffer). These copies involve communications which might be long-distance in a manycore processor. Binary tree writes imply complex memory coherence control. Figure 17 shows the main and print_body functions of a parallel sort with no array. The input data are given by the function my_random, delivering a precomputed random value. The random value precomputation avoids the pseudo-random sequence dependences. The main function prints the initial set of unsorted values and, in parallel, prints the sorted set.

```c
#define SIZE 10

int my_random(int i){
  switch(i){
    case 0: return 3; case 1: return 0;
    case 2: return 1; case 3: return 5;
    case 4: return 9; case 5: return 7;
    case 6: return 6; case 7: return 3;
    case 8: return 4; case 9: return 1;
  }
  return 0;
}

typedef struct {int (*f)(); int i; int v;} Arg;

int get(int i, void *arg){
  Arg *a=(Arg *)arg;
  return (a->f(i) < a->f(a->v));
}

int get_next(int i, void *arg){
  Arg *a=(Arg *)arg;
  return (a->f(i) <= a->f(a->v));
}

int for_pos(int i, int n, int v, int (*f)()){
  Arg a;
  a.f=f; a.v=v;
  return for_reduce(i,n,0,
           get,(void *)&a,sum,NULL);
}

int for_pos_next(int i, int n, int v, int (*f)()){
  Arg a;
  a.f=f; a.v=v;
  return for_reduce(i,n,0,
           get_next,(void *)&a,sum,NULL);
}

void print_body(int i, void *arg){
  printf("%d ", my_random(i));
}

void main(){
  for_loop(0,SIZE,print_body,NULL);
  printf("\n");
  for_loop(0,SIZE,sort_body,NULL);
  printf("\n");
}
```

Figure 17.
A parallel sort programmed in C: main, initial element print and my_random input functions

```c
int sum(int i, int n, int a, int b,
        void *arg){
  return a+b;
}

void element_body(int i, void *arg){
  Arg *a=(Arg *)arg;
  int p, pn, v=a->f(i);
  p =for_pos(0,SIZE,i,a->f);
  pn=for_pos_next(0,SIZE,i,a->f);
  if (p<=a->i && a->i<pn) a->v=v;
}

int element_at_pos(int i){
  Arg a;
  a.f=my_random; a.i=i;
  for_loop(0,SIZE,element_body,
           (void *)&a);
  return a.v;
}

void sort_body(int i, void *arg){
  printf("%d ", element_at_pos(i));
}
```

Figure 18. The parallel sort

The initial set print launches 11 threads, among which 8 read one or two elements from the my_random function, i.e. copy a value from the code to the output buffer. Figure 18 shows the sort. Function element_at_pos(i) returns the element at position i in the sorted set. Function for_pos(..., v) returns the position of the first occurrence of value v in the sorted set. Function for_pos_next(..., v) returns the position of the first element after v in the sorted set. This implements a very poor sequential sorting algorithm, requiring O(n^3) comparisons: for each element e, we recompute n times the number of elements less than e, instead of building an array once and using it. In a parallel processor, such recomputations are faster than storing and loading a data structure. The second for_loop in main launches 11 threads, among which 8 compute one or two elements at their position in the sorted set. The parallel threads are ordered when they are dynamically created, which sets the print order. Each computed element in the output only uses the initial values, given by calls to my_random. The element computations are all fully independent. Each is computed from a tree of threads having only depth dependences, thanks to the for reductions avoiding recurrences in the loop bodies. To summarize, sorting a set of scalars computes in parallel all the elements of the sorted output.
The duplicated elements at ranks i to j (i ≤ j) are the ones in the input set which have i lower and j lower-or-equal elements. The sorting function does not use any array, nor any storage to permute. The complexity is O((log n)^3), i.e. all the threads are deployed after (log n)^3 steps (to be compared to the O(n * log n) complexity of a sequential sorting algorithm).

4. COMPARING OS PARALLELIZATION TO HARDWARE PARALLELIZATION

Figure 19 shows a pthread implementation of the sum reduction, to be compared with the one on figure 3. A first difference is that the pthread version makes the parallelization explicit through the calls to pthread_create, the synchronization explicit through the calls to pthread_join, and the communications explicit through the argument transmission at thread creation and the result transmission at thread exit and join. These explicit calls obscure the code: the pthread version is four times longer than the figure 3 version parallelized by our proposed hardware.

```c
typedef struct{int *v; int n;} ST;

void *sum(void *st){
  ST s1, s2;
  long *s, *sl, *sr;
  pthread_t t1, t2;
  s=malloc(sizeof(long));
  if (((ST *)st)->n>2){
    s1.v=((ST *)st)->v;
    s1.n=((ST *)st)->n/2;
    pthread_create(&t1,NULL,
      sum,(void *)&s1);
    s2.v=((ST *)st)->v+((ST *)st)->n/2;
    s2.n=((ST *)st)->n-((ST *)st)->n/2;
    pthread_create(&t2,NULL,
      sum,(void *)&s2);
    pthread_join(t1,(void **)&sl);
    pthread_join(t2,(void **)&sr);
    *s=*sl+*sr;
  }
  else if (((ST *)st)->n==1)
    *s=((ST *)st)->v[0];
  else
    *s=((ST *)st)->v[0]+((ST *)st)->v[1];
  pthread_exit((void *)s);
}
```

The pthread run creates 12 threads, as does the hardware parallelization (figure 5 shows 11 threads for the parallelization of sum, to which one printing thread is added).

### 4.1. The compared architectural cost of parallelization

A second difference is that in the pthread run, the calls to `pthread_create`, `pthread_join` and `pthread_exit` add a high overhead.
The number of x86 instructions run by these calls can be measured using `pin` [10]. Table I shows the overhead in the run of the `sum` code of figure 19. The measure was done on an Intel Core i7-4900MQ operated by Ubuntu 14.04. The `pthread` code is compiled with `gcc 4.8.4-2`, with the `−O3` and `−static` options, and `libpthread-stubs0-dev 0.3-4`.

<table>
<thead>
<tr>
<th>x86 instructions run by</th>
<th>create</th>
<th>join</th>
<th>exit</th>
</tr>
</thead>
<tbody>
<tr>
<td><code>pthread</code> parallelization</td>
<td>727-736</td>
<td>136-755</td>
<td>10821-10987</td>
</tr>
<tr>
<td><code>hardware</code> parallelization</td>
<td>3-5</td>
<td>0</td>
<td>0</td>
</tr>
</tbody>
</table>

Table I. Number of x86 instructions run to create, join and exit threads

The `pthread_create` primitive ran 727 (call in `main`) or 736 (calls in `sum`) x86 instructions. The `pthread_join` primitive ran 755 (call in `main`), from 143 to 736 (first call in `sum`) and from 136 to 755 (second call in `sum`) instructions. The `pthread_exit` primitive ran from 10821 to 10987 instructions (it is not clear what `pin` exactly measures in `pthread_exit`: 10K instructions seems a lot; the measures for `pthread_create` and `pthread_join` have been confirmed in a second experiment using `gdb`, which was not possible for `pthread_exit`). The last line of the table gives in contrast the very low number of x86 instructions run when the parallelization is done by hardware. The hardware parallelization creates threads and sends register initialization messages. In the `sum` run (see figure 4), the first `sum` call runs 5 instructions to fork, copy registers `rdi`, `rsi` and `rbx` and call. The second call runs 3 instructions to fork, copy register `rcx` and call. The cost to synchronize threads is null, because they are ordered and hardware register renaming offers a free, natural synchronization between any reader and its unique writer. The OS overhead condemns OS-based parallelization to coarse grain.
To amortize 1.5K instructions run (or 12K if the `pthread_exit` cost is included), each thread should at least sum up a thousand values (resp. 12K). Parallelizing hardware makes fine-grain parallelization possible: one thread per pair of values.

### 4.2. The compared microarchitectural cost of parallelization

Figure 20. Data initializations for the `sum` execution

The number of instructions run is one part of the cost of parallelization. A second part is the data movement from source input to result output, i.e. a microarchitectural cost. In the pthread program, the input vector is initialized from the ELF file by the OS _start function (which later calls main). The data are centralized in the L1 data cache of the core running the _start function. This is illustrated on figure 20: the left part shows the _start function architectural load request and the right part shows the memory hierarchy microarchitectural fill request. Each leaf sum thread reads one or two elements of the summed vector v. These accesses trigger communications between the central location of the v elements and each requesting core, as illustrated on figure 21. This is hardware-driven, by copying cache lines from L3 to L1. Bus contention may serialize the requests, i.e. the threads. Each copy caches a full 64-byte line when only one or two elements are useful, uselessly transferring from 56 to 60 bytes in each communication. Caches are not adequate devices for parallel executions. The principle of spatial locality leads caches to keep large lines of data, i.e. to centralize them; this is in conflict with data distribution. The principle of temporal locality leads caches to keep data for multiple successive local accesses; this is in conflict with data recomputation, which avoids storing. Figure 22 shows the data movements when running the code of figure 4 in parallel on a parallelizing hardware.
Each leaf sum thread computes its partial sum from values encoded in the fetched instructions. The computing code is read from the instruction cache iL1, whereas in the pthread run, data are read from the data cache dL1 which holds vector v. The communications are reduced to the minimum, i.e. the values needed by the computation migrate from the code file to the cores.

4.3. Comparing a data structure based parallelization to a function based one

Figure 23 shows a pthread version of quicksort, to be compared with the sort program of figures 17 and 18.

```c
void *quicksort(void *sa){
  pthread_t tid1, tid2;
  sub_array_t sal, sar;
  int ip, p, t, i1, i2, f, l;
  f=((sub_array_t *)sa)->f;
  l=((sub_array_t *)sa)->l;
  if (f < l){
    ip=f; i1=f; i2=l; p=a[ip];
    while (1){
      while (i1<l && a[i1] <= p) i1++;
      while (a[i2] > p) i2--;
      if (i1<i2){
        t=a[i1]; a[i1]=a[i2]; a[i2]=t;
      }
      else break;
    }
    a[ip]=a[i2]; a[i2]=p;
    sal.f=f; sal.l=i2-1;
    pthread_create(&tid1, NULL,
      quicksort,(void *)&sal);
    sar.f=i2+1; sar.l=l;
    pthread_create(&tid2, NULL,
      quicksort,(void *)&sar);
    pthread_join(tid1, NULL);
    pthread_join(tid2, NULL);
  }
  return NULL;
}
```

Figure 23. A pthread parallelization of quicksort

Figures 24, 25 and 26 show how the data travel in the caches when the pthread quicksort function is run. The _start function copies the initialized vector from the ELF file into the core 0 memory hierarchy. The loop printing the initial vector belongs to the same thread as the _start function; it is run on core 0 and accesses the vector elements in cache L1. The main thread creates a first quicksort thread run on core 1. It gets the vector from L3 into its L1 to partition it (figure 24, left part: the cache miss propagates; right part: the cache hierarchy load). The partitioning while(1) loop updates the vector copy in the core 1 L1 cache (figure 25, left part). The quicksort thread creates two new quicksort threads, each sorting a half vector. The left half sorting thread is run on core 2 and the right half sorting thread on core 3.
Both threads read their vector half from the core 1 L1, which holds the only updated copy of the vector (figure 25, right part). Both requests have to be serialized. On the left part of figure 26, we assume core 2 gets access first; the right part shows the core 3 access.

Figure 24. Data movements during the quicksort execution parallelized by pthread

The data travel from the partitioning core to the partition-sorting ones. Each level of the quicksort binary tree moves the full vector. There are $n \cdot \log n$ data movements from L1 to L1 through L2 and L3, with no locality benefit. All these movements are avoided in the sort program of figures 17 and 18. As the values to be sorted are held in the code, i.e. encoded in the machine instructions translating function my_random, the only data movements come from the instruction memory hierarchy. Each core reads the data it needs from the closest instruction cache holding it. On figure 27, three threads run function my_random (abbreviated as my_rnd) on three cores. The three requests to L2 are serialized and it is assumed core 0 is served first (figure 27, left part); the requested code is loaded in the shared L2 (figure 27, right part). The next core to be served is core 1 (figure 28, left part); the memory hierarchy plays its role and L2 hits. Core 2 is the last to be served, directly from L2 (figure 28, right part).

5. RELATED WORK AND CONCLUSION

Parallel programming concerns automatic parallelization (i.e. by the compiler) and "hand-made" parallelization using APIs (Pthreads, MPI, OpenMP for CPUs and CUDA, OpenCL for CPUs+GPUs). A survey of parallel programming models and tools for multicore and manycore processors was published in 2012 [11]. Automatic parallelization handles loops with techniques based on the polyhedral model [12]. If some addresses in a loop are dynamic (e.g.
pointer-based), the compiler cannot optimally parallelize. It is also the case if the loop control is complex or if the iteration body contains statically unknown control (e.g. involving early exits via return or break instructions). In [13], some transformation techniques are added to the polyhedral model to remove some of these irregularities. The model we propose assumes that part of the parallelization is hand-made, i.e. structuring the program with template functions replacing all the for and while loops with their functional divide-and-conquer equivalents. Everything else is handled by the hardware without any compiler, library or OS primitive intervention. Parallelization based on OS threads [14] [15] suffers from the overhead of OS primitives, the opacity of the code, which must exhibit the synchronizations and communications, and its dependency on the number of cores through the explicit creation of threads. A major drawback of OS threads is their non-deterministic behaviour, as pointed out by Lee [16]. The parallelizing hardware we propose has a low architectural overhead (a few machine instructions run at thread creation, compared to thousands of instructions run in the pthread API). The existing contributions on a hardware approach to automate parallelization [17][18][19][20] are penalized by the low basic Instruction Level Parallelism (ILP) measured in programs [21]. The hardware-based parallelization in [22] overcomes this limitation in three ways: (i) very distant ILP is caught when fetch is parallelized, (ii) all false dependences are removed through full renaming and (iii) many true dependences are removed by copying values. The remaining dependences in a run are true ones, related to algorithmic sequentialities that the program implements. In such conditions, the authors in [23] have reached a high ILP (thousands), increasing with the data size, on the parallel benchmarks of the PBBS suite [24].
In the hardware design we propose, fetch is parallelized and there is no data memory, i.e. no memory dependences. In such conditions, high ILP can be captured in a program run when the program implements a parallel algorithm. The microarchitectural cost of parallelization is reduced because there are fewer communications, involving only neighbor cores. The proposed programming style avoids data storing, which simplifies the hardware, makes parallel computations more independent and uses the well-known functional programming paradigm, as in parallel Haskell [25]. Instead of computing a data structure globally, we compute its elements individually and in parallel. The efficiency does not rely on the cache locality principle, which applies poorly to a parallel run. Instead, it relies on parallel locality, as defined in [26]. The number of transistors on a chip allows the integration of thousands of simple cores such as the one proposed in this paper. Parallelization should be fast and reliable, leading to reproducible computations, as with the programming model proposed in this paper.

REFERENCES
Abstract Fast compilation is essential for JIT-compilation use cases like dynamic languages or databases as well as development productivity when compiling static languages. Template-based compilation allows fast compilation times, but in existing approaches, templates are generally handwritten, limiting flexibility and causing substantial engineering effort. In this paper, we introduce an approach based on MLIR that derives code templates for the instructions of any dialect automatically ahead-of-time. Template generation re-uses the existing compilation path present in the MLIR lowering of the instructions and thereby inherently supports code generation from different abstraction levels in a single step. Our results on compiling database queries and standard C programs show a compile-time improvement of 10–30x compared to LLVM -O0 with only moderate run-time slowdowns of 1–3x, resulting in an overall improvement of 2x in a JIT-compilation-based database setting. CCS Concepts: - Software and its engineering → Just-in-time compilers; Translator writing systems and compiler generators. Keywords: MLIR, JIT Compilation, Template-based Compilation, Fast Compilation, Binary Code Patching 1 Introduction Just-in-time compilation is commonly employed to improve the performance of programs, either by speeding up subsequent computations, as in compiling database query engines [14, 24, 28, 31], or by generating an executable for previously unseen code, such as client-side code execution of JavaScript or WebAssembly [5, 19, 34]. In either case, the time it takes to compile the input and generate an executable is counted towards the execution time of the program, directly affecting the user experience. As a result, a key challenge for constructing JIT compilers is reducing the compilation time without ultimately sacrificing code quality. 
Template-based code generation, where precompiled code fragments are merely combined during compilation, allows for extremely low compile-times. This was previously demonstrated with VCode [16], focusing on quick encoding of machine code through a reduced set of operations with manually written templates for the machine code, and the Java virtual machine Maxine [41], whose baseline compiler combines templates for every bytecode instruction, which were written in Java and precompiled by the optimizing compiler. A more recent approach in this field [42] uses Clang/LLVM [25] to compile templates written in C++ and demonstrates the applicability for larger templates, which leads to shorter compile-times and better run-time performance due to more optimization during precompilation. However, all approaches so far require handwritten templates, which not only come with substantial implementation effort, but the templates need to be maintained in addition to existing lowerings for optimized compilation. We, therefore, propose a template-based compilation approach leveraging MLIR [26], where instructions usually provide a — possibly multi-step — lowering to LLVM-IR for native code generation. This approach allows for automatically deriving templates using the existing lowerings, obviating the need for manual template development or maintenance. Moreover, as single MLIR instructions are not limited to being simple operations, this approach easily allows for templates with complex logic, enabling further optimizations during template pre-compilation and faster compile-times due to fewer templates. Fig. 1 visualizes this approach. Our results show that we can achieve a compilation time speedup of 10–30x over LLVM -O0 with only moderate runtime slowdowns of 1–3x, which results in an overall improvement of up to 2x in a JIT-compilation-based database setting. 
The main contributions of this paper are as follows:
- A framework for extracting the semantics of an arbitrary MLIR instruction from its defined lowering into a reusable template.
- A template-based compiler that generates native code from an MLIR program targeting x86-64 and AArch64.
- Optimizations for improved register usage and handling of constant values in a template-based compiler.

2 Background: MLIR

MLIR [26] is a compiler framework that aims to simplify the creation, transformation, and optimization of intermediate representations in SSA form. It allows for the implementation of custom sets of instructions, called dialects. Each SSA instruction is made up of input and output values, can contain regions of other instruction sequences in its body (e.g., the body of a natural loop), and has constant attributes to further configure individual instructions (e.g., the value of a constant instruction). These attributes are further classified as inherent (also referred to as properties) or discardable, where discardable attributes may be omitted at any time and, therefore, do not contain essential information for the execution of an instruction. The MLIR framework provides a consistent way to implement general optimizations (e.g., constant propagation and common sub-expression elimination) and conversions from a higher-level dialect to a lower-level one. For ease of use, MLIR comes with a set of upstream dialects, including the scf dialect for handling structured control flow, the arith and math dialects for mathematical expressions, and the memref dialect for interacting with memory. A common lowering target, optionally with different intermediate dialects, is LLVM-IR. For this purpose, MLIR provides a large part of LLVM-IR as an MLIR dialect serving as the target for conversions, which can then be translated to actual LLVM-IR easily.
Recent publications started applying the capabilities of MLIR to various topics going beyond primary machine learning use cases [10, 20, 27, 40] to discover new optimization opportunities when compiling static languages like Fortran, C and C++ [4, 29, 30] and different abstractions for data processing pipelines [13, 21].

3 Template Generation

Based on a set of sample input programs, templates are derived and stored for all contained instructions. Further programs from the same domain can then be compiled using the previously prepared templates.

3.1 Instruction Prerequisites

Our automatic template generation approach is generally applicable to all dialects independent of their abstraction level—we successfully applied it to low-level dialects, like LLVM IR, as well as very high-level ones, like the ONNX dialect [20]. The only prerequisite is that the lowering of each instruction does not depend on any information coming from outside the instruction itself. Otherwise, it is impossible to process each instruction in isolation and thus not feasible to automatically capture its semantics into a standalone template. Most common instructions adhere to this rule. Nonetheless, there are some exceptions, e.g., the LLVM branching instructions, as they depend on external block labels, and the `alloca` instruction, as its lowering depends on the surrounding scope. Another set of conflicting instructions is the upstream OpenMP dialect, where many instructions are very tightly coupled to their surrounding instructions (e.g., instructions inside/outside a critical section).

3.2 Capturing Instruction Semantics

As we want to avoid handcrafting templates, our approach derives the native code templates from MLIR instructions without any manual implementation effort. The main challenge is that MLIR instructions themselves are opaque: their full semantics are only defined in the lowering.
To capture the semantics of an instruction into a native code template, we isolate each instruction into a function, provide opaque inputs to the instruction and capture the output using `unrealized_conversion_cast`s, which are typically not folded by any further conversions or transformations. Listing 1 shows an example.

```mlir
func.func @add() -> ptr {
  // inputs
  %0, %1 = unrealized_conversion_cast to (i64, i64)
  %2 = arith.addi %0, %1 : i64
  // output(s) - return to keep it alive
  %3 = unrealized_conversion_cast %2 : i64 to ptr
  return %3 : ptr
}
```

Listing 1. Automatically derived abstraction of the `arith.addi` instruction in an internal intermediate state. Inputs and outputs are made opaque for the instruction using `unrealized_conversion_cast`s. The symbolic value resulting from the cast applied on the outputs is returned from the function to keep the values and thus the computations alive.

For instructions with regions, we furthermore place external function calls into every region to encapsulate the behavior. To keep track of region arguments, these are written to memory before the call, and the operands of the terminator instruction are loaded from memory. The opaqueness of the instructions poses another challenge when matching an incoming instruction to a fitting binary template. The lowering of an instruction can be different depending on the operand or result types. For example, the behavior of most upstream instructions from the `arith` or `math` dialect depends on the type, which is not necessarily a scalar value but could be a tensor of values. Therefore, a unique template must be generated for each input and output type combination of an instruction, as the lowering may be different for each. The same applies to inherent attributes, which may affect the lowering but cannot be reflected by any of the instruction inputs.
For operations that do not adhere to the defined prerequisites, we provide two extension points to support them manually: (a) a custom abstraction can be provided at template generation time; and (b) a custom implementation can be inserted during the run-time compilation of the template.

### 3.3 Lowering & Compilation

Next, we apply the dialect-specific conversions on the derived abstraction to lower it to LLVM-IR. In the example, this converts the `arith.addi` into an `llvm.add` and converts the MLIR native pointer types to their LLVM counterparts. Afterward, we need to provide implementations for the opaque inputs and outputs so that we can actually compile the code. A simple but effective way is to implement inputs as memory loads and outputs as memory stores. The values are stored in a value storage, which will later be allocated on the stack and is passed as an argument to the template function. This method allows for efficient addressing with 32-bit offsets instead of arbitrary 64-bit memory addresses and also reuses the natural stack growth and shrinking. As in [42], the actual locations and offsets are computed during run-time compilation and patched into the templates using addresses of symbols, which result in relocations and, therefore, can be patched during run-time compilation (cf. Sec. 4.2). As a refinement, we make these symbols `weak` to prevent LLVM from making any assumption about the value being non-zero and use the `absolute_symbol` attribute to restrict relocations to absolute 32-bit ones, which leads to more efficient code. To allow the composition of the templates and to enable control flow between instructions, we leverage the continuation passing style (CPS) [38] concept: we enforce a tail call to the continuation function at the end of each template using the `musttail` annotation to transfer control flow to the next template.
The continuation is an external symbol whose actual address is patched during run-time compilation. A resulting `jmp` instruction at the end of the template can easily be detected and omitted when concatenating templates. However, this technique is not applicable for region calls, as those are regular non-tail calls, after which the execution continues inside the template. Instead, these result in regular calls with the value storage pointer passed as an argument. Operands for the region terminator are loaded from the value storage after the call. Listing 2 shows an example of the final LLVM-IR code, which is then compiled as the template.

```llvm
; external symbol addresses as patchable constants
@off_0 = extern_weak global i8, align 1, !absolute_symbol !{i64 0, i64 INT32_MAX}
@off_1 = extern_weak global i8, align 1, !absolute_symbol !{i64 0, i64 INT32_MAX}
@off_2 = extern_weak global i8, align 1, !absolute_symbol !{i64 0, i64 INT32_MAX}

declare void @next(ptr %value_storage)

define void @add(ptr %value_storage) {
  ; load operands from memory at patched offsets
  %1 = getelementptr i8, ptr %value_storage, i64 ptrtoint (ptr @off_0 to i64)
  %2 = load i64, ptr %1, align 4
  %3 = getelementptr i8, ptr %value_storage, i64 ptrtoint (ptr @off_1 to i64)
  %4 = load i64, ptr %3, align 4
  %5 = add i64 %2, %4 ; the operation itself
  %6 = getelementptr i8, ptr %value_storage, i64 ptrtoint (ptr @off_2 to i64)
  store i64 %5, ptr %6, align 4
  ; call to continuation
  musttail call void @next(ptr %value_storage)
  ret void
}
```

Listing 2. Automatically derived abstraction of the `arith.addi` instruction in LLVM-IR. Operands are memory loads and the result is written back to memory. Offsets into the value storage are represented as addresses of external symbols and patched later during run-time compilation. Control flow is transferred by an enforced tail call.
### 3.4 Binary Format

To finally derive a binary code template, the abstracted function is compiled to a binary object — currently limited to the ELF format — using the LLVM optimization and code generation infrastructure at its highest optimization level.

```
add: ; rdi = pointer to value storage
  movq $off_0(%rdi), %rax
  addq $off_1(%rdi), %rax
  movq %rax, $off_2(%rdi)
  jmp $next
```

Listing 3. Compiled template of `arith.addi` from Listing 2. Offsets into the value storage and the continuation address of the template result in relocations (highlighted).

We extract the templates by parsing the ELF file, using the text sections as binary code for the template and storing the data sections (e.g., .data and .rodata) to forward them to the runtime system. Furthermore, we track the relocation entries and identify patchpoint symbols for value storage offsets and continuation calls by their name. We also track other relocations not originating from the framework, as we have to take over some tasks of a run-time linker as well (e.g., patching addresses to other symbols or data sections). A simple template binary for the addi instruction is shown in Listing 3, which only consists of the binary code and some patchpoints, including the continuation address.

3.5 Case Study: Templates for the LLVM Dialect

As one example, we look at the application of our approach to the LLVM dialect, which is required to run the benchmarks used during evaluation. For most instructions, we can generate templates automatically without any manual interaction. However, to fully cover the LLVM IR instructions required for the benchmarks, we had to provide some custom implementations:

- Branching instructions (Br, Condbr and Switch) are hand-assembled, and run-time compilation additionally deals with SSA destruction.
- AddressOf requires a custom abstraction, as memory locations are only determined at run-time compilation.
- Regions and value attributes of globals are evaluated during run-time compilation and placed into memory, where they can be referenced with AddressOf. - Stack allocations (Alloca) retrieve memory from the value storage (fixed size) or the heap (var sized). Nonetheless, this is a comparatively small amount of effort to spend on such an extensive instruction set. 4 Run-time Compilation of Templates The run-time compilation phase stitches together the pre-compiled templates to produce code for a previously unseen input program. This corresponds to the compilation part of a JIT compiler and is therefore further referred to as compilation. In contrast to the previous stage, it is time-critical as the compilation contributes to the overall execution time. 4.1 Selection As a first step, the input program must be covered with existing templates. To facilitate that, we walk the input program starting at the top-level region in a depth-first manner and find a matching template from our template library for each instruction individually. As described in Sec. 3.2, an instruction can occur with different configurations (e.g., input types or property values) and therefore, for a template to match, it must match the full signature consisting of the operation name, input and output types, the number of regions and the property values. To make looking up signatures as efficient as possible, we store them in a hash map. The used hash is constructed over the operation type and the properties, as the combination of those provides most of the entropy of a signature. Furthermore, an MLIR context ensures that each registered operation type is unique, allowing us to compare the operation type identifier, which is just a pointer, instead of the string representations. 4.2 Instantiation Once the matching template is found, it is instantiated. 
The corresponding binary code is copied to the designated memory location and the identified patchpoints — mainly offsets into the value storage and continuation addresses — are adjusted to their corresponding values. During copying, we can omit unnecessary jumps between two neighboring templates. All values defined by the current instruction (results and region arguments) are assigned to a slot in the value storage. To keep compile-times low, we allocate the slots during the same pass that generates the native code. In order to reduce the memory usage of the value storage, we track the liveness of the slots and reuse them once they become free. For performance, however, we do not perform a dedicated liveness analysis but generously over-approximate the lifetime intervals: the end is defined as the end of the region unless the instruction has only a single use in the same basic block, in which case the lifetime ends at that instruction.

4.3 Fixup and Wrapper Function

As we generate code in a single pass over the input, some addresses or symbols are not known upon their first reference. This mainly happens due to forward references to yet undefined functions, global symbols, or basic block labels. Those locations are tracked and updated as soon as the referenced data becomes available. For all address references — also during instantiation — we take advantage of compile-time information, which can lead to further linking optimizations, e.g., we replace loads from the GOT by directly computing the desired address if in range, saving the space of the GOT entry and the load from memory at run-time. Once code generation finishes, control flow has to be transitioned into the newly generated assembly. Transitioning from our host C++ program to the generated assembly is possible by looking up the address of the generated main function and calling it with the default C calling convention.
The function template, which was generated to embody the function declaration, takes care of allocating the initial value storage, saving registers (and restoring them upon return), preparing the value storage pointer argument and ultimately invoking its body. From this point on, our framework does not provide any runtime components during execution. 5 Optimizations 5.1 Constant Evaluation In contrast to LLVM, where constants can be used as values arbitrarily, MLIR conceptually does not distinguish constants and models them as constant instructions, for example, arith.constant for constant numbers. Because the actual values for those constants are stored as attributes, each of those instructions would be recognized as its own template. To avoid a huge number of different constant templates only differing by their value, operations that have no side effects, no regions, no input operands, and no references to any dynamic addresses, can be executed during template generation. During run-time compilation, the results can then be injected on demand using a custom template. This optimization considerably reduces the number of generated templates as all constant-evaluated instructions share a single dedicated template. 5.2 Template Calling Convention So far, every value resides in the value storage in memory and is passed into a template by patching its offset into the instantiated binary. For constants, this causes a store operation of the value to memory, which the template loads again immediately afterward. To improve this, we adjust the handling of inputs and outputs of our templates. Reasonably small input values (up to two registers wide) are passed directly in registers, while larger values remain in memory and are addressed via patched offsets. We achieve this by passing such values as parameters, which are passed in registers by the underlying calling convention, to the function template. 
Similarly, small output operands are passed as arguments to the continuation function. For larger data types, the handling remains unchanged, as only a small subset of their values is typically used. These are, therefore, better suited for in-memory passing, as the template can specifically access the required elements, avoiding large loads and stores. Listing 4 gives the binary code for the arith.addi example with this optimization.

```
add: ; params: %rdi = value store, %rsi = a, %rdx = b
  addq %rdx, %rsi
  jmp $next ; params: %rdi = value store, %rsi = res
```

Listing 4. Compiled template of arith.addi using the optimized calling convention: input and output values in registers; continuation address to be patched (highlighted).

During run-time compilation, we additionally emit code for loading the values from memory where required and materializing constants directly into registers. Results are written back to memory after each template. Due to the explicit separation of operand loading and computation, memory operands can no longer be fused into arithmetic operations (as happened in Listing 3, where loading the second operand is fused into the addq instruction), but this had no measurable performance impact, as most modern x86-64 CPUs split load-arithmetic instructions into multiple micro-ops anyway. As a side effect, this also reduces the template size significantly, as loading/storing values is no longer part of every template.

5.3 Register Caching

Even with the previous optimization, the resulting code excessively loads values from memory into registers and stores results back into memory. To reduce the number of memory loads, we cache the result values in registers — in addition to writing them back to the value storage — thus, in many cases, replacing the load from memory with a copy from a register. We can even eliminate the store to memory if the value has its single use in the immediately following instruction.
While it is possible to rely solely on callee-saved registers, which are guaranteed to be unmodified by the template, many templates use only very few registers, so other callee-saved registers can serve as additional cache space. During template generation, we therefore analyze which registers are clobbered. We obtain this information during template compilation from LLVM by analyzing the instructions of the final Machine IR of the function. During code generation, we additionally track the cached values in registers and generate code to move result values into and out of such registers. While the additional moves increase the number of instructions, this pays off during execution as we save on loads from memory. We evaluated two different strategies to assign the cache registers. When caching a value and no register is available, we either (1) do not cache the value at all or (2) override one of the cached values in a round-robin manner. The round-robin approach was chosen to account for SSA values usually being used rather locally and losing importance as the code progresses. But there is no significant difference between them — both save up to 30% of the memory loads. Therefore, our evaluation uses strategy (1) due to its lower compilation time. Cache registers become free if a value reaches the end of its lifetime (cf. Sec. 4.2) or a template clobbers them — in which case we fall back to loading from the value storage.

5.4 Higher-Level Optimizations

Further common optimizations for interpreters and template-based compilers include supernode generation [11, 37] and template specialization on certain inputs [22]. However, with a flexible framework like MLIR, we believe that there is no need for such techniques. Instead, one can leverage the multi-level approach of MLIR and apply the code generation on a higher-level, domain-specific dialect.
This implicitly creates supernodes, as higher-level instructions are typically more complex and are often lowered to several lower-level instructions. A simple example is the scf dialect for structured control flow, which provides explicit operations for common constructs like while and for loops instead of the plain branch instructions found in the lower-level dialects cf and llvm. Due to the reduced number of instructions and the simpler control flow, using higher dialects as a starting point for code generation also reduces compilation time.

6 Target Architecture Considerations

Although the examples so far targeted the x86-64 architecture, our approach does not require a specific architecture. The template generation relies solely on LLVM and is thus capable of generating templates for various architectures. Even internal code generation (e.g., storing a register to memory or vice versa) reuses the template compilation approach. Thus, porting the approach to a new architecture requires only moderate effort (e.g., architecture-specific relocations). In addition to x86-64, we currently also support AArch64. A key difference is its fixed-size and, therefore, less flexible instruction set. In particular, constants are often composed through multiple instructions, and applying relocations often involves bit-level adjustments to the code — in contrast to x86-64, where relocations are generally byte-aligned and contiguous. In turn, the fixed-length instruction set allows for more straightforward modification of the binary code, thus simplifying optimizations such as replacing GOT entry loads with direct address computation.

7 Evaluation

We evaluate our approach on a range of micro-benchmarks and benchmark suites. For all benchmarks, we assume that all required templates are generated and prepared for use. This assumption is generally feasible, as the number of potential instructions is inherently limited — as shown by [42].
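What "prepared templates" means here can be roughly illustrated by modeling template lookup as a map keyed on the instruction signature (a hypothetical Python sketch, not the actual implementation; the byte string is the encoding of `addq %rdx, %rsi`):

```python
# Sketch of a pre-generated template library: code generation only looks up
# templates by instruction signature (op name, operand types, result types),
# so all required templates must exist beforehand, but new signatures can
# still be registered dynamically.
templates = {}

def register_template(op, operand_types, result_types, code):
    templates[(op, tuple(operand_types), tuple(result_types))] = code

def select_template(op, operand_types, result_types):
    return templates[(op, tuple(operand_types), tuple(result_types))]

# addq %rdx, %rsi encodes as 48 01 d6 on x86-64
register_template("arith.addi", ["i64", "i64"], ["i64"], b"\x48\x01\xd6")
assert select_template("arith.addi", ["i64", "i64"], ["i64"]) == b"\x48\x01\xd6"
```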
We compare our approach against different LLVM back-ends. As the LLVM back-ends do not operate on MLIR directly but on LLVM-IR, we first lower the MLIR input to LLVM-IR. This step is not included in the measurements, as it alone already takes longer than the entire compilation with our approach. We then use LLVM ORC JIT, typically with the small code model; only SPEC requires the medium code model. For -O0, we use FastISel as the instruction selector; for optimized compilation, we use -O2, as there are no significant differences to the other back-end optimization levels. Where possible, we compare compiling with our approach from the higher-level upstream dialects and from the lower-level LLVM dialect. The MLIR upstream-dialect input was derived using the C frontend of Polygeist [29] with optimizations turned off. For the LLVM dialect, we first apply Clang with -O0 to derive LLVM-IR and afterward use the MLIRTranslate tool to import it into MLIR. Our x86-64 benchmark platform is an Intel Xeon Platinum 8260 CPU equipped with 160 GiB RAM; our AArch64 platform is an Apple M1 core equipped with 16 GiB RAM; all machines run Linux and an LLVM development snapshot (commit 5d492766a8). Except for the expensive SPEC benchmarks, all diagrams report the median of ten runs.

7.1 Impact of Optimizations

To analyze the impact of the optimizations described in Sec. 5, we compare our approach with the LLVM back-end optimization levels -O0 and -O2 as well as the LLVM interpreter on a set of micro-benchmarks. The benchmarks were designed to stress the impact of our optimizations on compute-intensive tasks. The Eratosthenes sieve runs in a single function and benchmarks control flow, memory, and arithmetic operations. The quicksort benchmark extends this idea, slightly shifting the focus from control flow inside a single function to recursive calls and memory operations.
Finally, Fibonacci is used to show the negligible startup overhead for minimal programs while also indicating the limitations of our approach for programs exclusively bound by function calls. Figure 2 shows the results. The standard deviation of the results remains below 10% for our approach and below 5% for the LLVM levels in the compilation-time dimension, and below 3% for all approaches regarding execution time. The compilation times of our approach are an order of magnitude faster than LLVM -O0, in the range of 32–72x. Run-time performance on the micro-benchmarks is generally comparable, ranging from 2x slower to 20% faster than LLVM -O0. The LLVM interpreter is around 100x slower than all other approaches. Its start-up times are generally lower, but even these are outperformed by our approach once. The significant run-time overhead in the Fibonacci benchmark is due to the impact of recursive calls, which prevent register caching, as calls conceptually clobber all registers. Our optimization of adjusting templates to primarily use registers and to materialize constants reduces execution time by 15%. However, the cost of separately inserting loads/stores slightly increases compilation times by 8%. Extending this with our register caching strategies significantly reduces execution time by another 26% (37% over baseline). Nonetheless, generating the additional instructions and tracking registers has a compile-time cost of 10% (19% over baseline). Within our approach, starting from a higher abstraction level (first row) is clearly favorable, as such programs generally contain fewer and more complex operations and, therefore, cause less work during compilation and allow for better code inside the templates.

7.2 LingoDB — an MLIR-based Database Engine

Compiling database engines is an important application area of JIT compilation, as the compilation time fully counts toward the overall processing time of a query.
LingoDB [21] is an MLIR-based query execution engine that lowers SQL queries from a declarative top-level dialect, on which it also performs query plan optimization, down to the LLVM dialect to compile queries to native code. We replaced the last lowering stages with our approach to generate native code directly. This technique is applied on two levels: the lowest, LLVM-IR level and the one above, consisting of upstream dialects and a LingoDB-specific utility dialect. We use TPC-H [39] (scale factor 0.1) as a typical small-sized workload. As our code generation approach is intended as an unoptimized tier, an adaptive execution pipeline could switch to optimizing back-ends for larger data sets. Fig. 3 shows the results, comparing the existing LingoDB modes speed (-O2) and cheap (-O0) with our approach. The standard deviation for the LLVM execution stages is about 5% for execution and compilation, while our approach deviates by 20% in the compilation-time dimension due to the significantly lower absolute values, and by 10% (upstream dialects) to 20% (LLVM-IR dialect) in execution time. When compiling from the higher-level dialects, our approach generates code an order of magnitude faster than LLVM -O0, taking about one millisecond per query. As a trade-off, the mean execution time increases by 40% (3% (Q14) to 76% (Q17)) compared to LLVM -O0. Nonetheless, when accounting for both stages, the time to compile and execute a TPC-H query is reduced by 43% with our approach compared to LingoDB cheap and by around 4x compared to LingoDB speed. Considering that query execution is usually multi-threaded, the speedup increases further, as parallel execution can only start after single-threaded compilation finishes. When comparing with LLVM -O2, the execution time is 108% higher (35% (Q15) to 200% (Q17)). However, at such small data sets, the time spent generating optimized code does not amortize.
Similar to the micro-benchmarks, starting from the higher-level dialects improves both compilation and execution time. Although the template-based compilation approach of [42] also supports a subset of the TPC-H queries, it is difficult to compare the two directly, as LingoDB provides more optimized operator implementations and an improved query planner. Even for TPC-H Q6, where the query plans are identical, our approach executes around 2x faster — both run on our machine. For other queries, where query optimization is important, the run-time differences span orders of magnitude. The compilation time of our approach, however, is 10x larger; we elaborate on the details in Sec. 7.5. Nonetheless, a code generation time in the order of milliseconds is sufficient, as the remaining stages in the query execution pipeline — query optimization and the prior MLIR lowerings — take multiple milliseconds anyway.

7.3 PolyBenchC

In addition to JIT compilation, our approach can also be used for the static compilation of languages like Fortran and C, where short compile times are essential during development. We first evaluate our approach on the PolyBenchC [36] benchmark suite, a widespread example of polyhedral optimization techniques. Figure 4 shows the results. The standard deviation remains below 10% for execution and compilation time with all approaches. Compiling from the upstream-dialect input not only provides faster compilation than starting from the LLVM dialect; the execution-time difference is also very apparent for these programs, as the canonicalized representation using MLIR upstream dialects is more concise and expressive for numeric kernel computations.
Compilation is one to two orders of magnitude faster with our approach on both abstraction levels compared to LLVM; the execution speed ranges from a slowdown of 165% (jacobi-2d) to a 20% speedup (jacobi-1d) compared to LLVM -O0 (median slowdown: 34%). A comparison with the approach of [42] is only partially possible. Although they run the same benchmark suite, their evaluation targets compilation from optimized WebAssembly code, in contrast to the less optimized and more high-level representation we use. The closest we can get is to start not from unoptimized input code but from an optimized input representation derived with Clang at optimization level -O3, similar to how the WebAssembly input was already optimized once. However, as our input still remains comparably high-level, we miss any back-end optimizations that might be applied during WebAssembly emission. Again, we reevaluated their results on our machine. With the optimized input, the execution-time difference ranges between a 10x slowdown and a 2x speedup for our approach compared to theirs. However, we generate code in half the time required by their approach, again taking advantage of the concise representation of higher-level IRs.

7.4 CoreMark and SPECint 2017

To show the full extent of the LLVM-IR coverage, we run the CoreMark [1] and SPEC CPU 2017 Integer [2] benchmarks. In all cases, we could not derive MLIR code from the C sources using Polygeist due to its limited functionality and, therefore, only measured the LLVM-dialect level. The Fortran benchmark (SPECint 548.exchange2) was compiled using Flang [4], which can output the LLVM MLIR dialect directly. Some SPEC benchmarks (500.perlbench, 502.gcc, and 531.deepsjeng) could not be transformed into MLIR at all, or the resulting MLIR led to a timeout or crash, even when compiling with the LLVM back-ends. Furthermore, we excluded the other C++ benchmarks due to their use of C++ exceptions, which we currently do not support.
Due to limited memory, we could only run one of the reference executions of SPEC 557.xz on our AArch64 machine. We report the median of three compilations/executions of the reference workload in Fig. 5. All results have a standard deviation of less than 5% in both dimensions with all approaches. Our approach generates code one to two orders of magnitude faster than LLVM, while execution time is 2–3x slower than LLVM -O0. Our register caching optimization has a particularly strong impact on these benchmarks, leading to a runtime improvement of nearly 2x. On AArch64, the relative compile times closely follow those on x86-64, while the execution-time slowdowns are slightly higher for 525.x264 and 548.exchange2.

7.5 Compile-time Analysis

7.5.1 Template Generation. Template generation happens ahead of time and is therefore not considered time-critical. Table 1 lists the numbers, sizes, and generation times of the templates used for the previous evaluations. They were obtained by timing the template generation stage and inspecting the resulting template library. The number of templates for the SPEC benchmark suite seems comparably high but mainly consists of 1400 func and 1300 call templates, whose signatures contain the respective symbol names, as well as 2000 getelementptr templates, as constant indices are stored as properties of the instruction. By providing a custom template implementation for getelementptr and by ignoring the symbol name — using only one template for all functions with the same function signature — for func (reduced to 280) and call (reduced to 400), we could reduce the number of required templates down to about 1000. Compared to [42], we generate far fewer templates, as we only generate a single variant per instruction signature, in contrast to the multiple variants required by their register allocation scheme and supernode construction.
Additionally, their reported template generation time is in the order of minutes, significantly slower than our approach.

7.5.2 Run-time Compilation. At its core, template-based code generation strives for very low compilation times. To provide further insight into where the time of the time-critical compilation stage is spent, we instrumented our compiler with additional time measurements — Figure 6 shows the most relevant components. The most significant part is spent on template instantiation (cf. Sec. 4.2), followed by template selection (cf. Sec. 4.1). The latter could be avoided completely by directly mapping the input MLIR instructions to their corresponding signatures. This mapping could be done using perfect hashing (as used by [42]); however, this would preclude dynamically adding further templates. Tracking the current storage location of each value via a hash map is also comparably expensive; this could be optimized by storing the location inline with the value, but MLIR does not support attaching custom information to a value. Finally, evaluating global constants also takes up some compilation time, because they must be evaluated before generating code for them, as described in Sec. 3.5.

Table 1. Number, size, and generation time of the templates used per benchmark set:

| Microbench. | 72 | 1 | 20 | < 1 |
| PolyBenchC | 411 | 8 | 112 | 2 |
| LingoDB | 1735 | 11 | 466 | 10 |
| SPEC | 5033 | 122 | 1390 | 37 |

The remaining time is spent reflecting, for each input MLIR instruction, on its type, input operands, result values, and regions to configure template instantiation correctly. Operands must be put into the expected register, or their storage offset must be recalled; result values are assigned to slots, stored, and cached in registers; and the memory offsets for region arguments, as well as terminator operands, must be recorded. Profiling indicates that a substantial portion is spent directly on the MLIR reflections.
This can be avoided by statically providing the information during the compilation of the framework, which is only possible for a restricted, previously known domain, contrary to what MLIR provides. In summary, taking advantage of the flexibility provided by MLIR comes with some cost and currently limits major compile-time improvements, motivating further performance improvements in MLIR in the future.

8 Discussion and Future Work

The results show that our approach achieves an order of magnitude faster compile times than LLVM -O0. Although execution times are up to 3x higher on some benchmarks, they are fairly close to, and sometimes even on par with, the existing baselines on other benchmarks. Notably, the combination of massively reduced startup time with only moderately slower execution enables effective use as a baseline JIT compiler. Using MLIR as a starting point allows us to target a continuously growing and open ecosystem, reducing complexity, as higher-level optimizations (e.g., supernodes) are easily reflected in MLIR's multi-level approach. Automated template generation obviates the implementation and maintenance effort without compromising the promised run-time-to-compile-time trade-off. MLIR's flexibility, however, does come at some cost: the compilation times cannot always keep up with the previously presented Copy-and-Patch approach [42], which compiles roughly between 2x slower and 10x faster than our approach. Nevertheless, we argue that these minimal differences in absolute terms hardly result in reduced latencies in end-to-end scenarios and, therefore, are not worth the effort of manual template construction. In turn, our code can keep up with — if not improve on — their execution speed. Additionally, our approach is not limited to JIT use cases: we can provide a reasonable compile-to-run-time trade-off for almost arbitrary dialect inputs, including LLVM-IR.
Our approach can offer a fast compilation tier that is at least applicable in development scenarios, where most of the time is spent on compiling code of which only a small snippet is ever executed. However, to fully cover this scenario, some work remains on emitting debug information and on supporting further binary formats (e.g., Mach-O and PE). In all cases, our approach is applicable without major changes to the existing MLIR infrastructure: it uses the same input representation that is also used for regular compilation, and all dialects with a lowering to LLVM-IR can, at the very least, be supported at the LLVM-IR dialect level.

9 Related Work

The most recent and most similar stand-alone, template-based compiler implementation [42] uses templates written in C++, which are precompiled to machine code using LLVM. When compiling a program, the framework combines the templates for the operations and applies some cheap optimizations (e.g., jump elimination). For further performance improvements, they employ a simple register allocation scheme, limited to use inside expressions, by providing templates for multiple possible register assignments. Our approach, in contrast, generates templates automatically instead of relying on manually written C++ templates. Furthermore, our register caching is generally more flexible and not limited to expressions. Therefore, it is more effective beyond micro-benchmarks and improves the execution of large programs. Historically, one main application of template-based code generation was dynamic code optimization based on run-time invariants. Vcode [16] was one of the first template-based code generation systems focusing on fast compilation. It provides a reduced, platform-agnostic instruction set that is translated to machine code by merely combining hand-assembled templates for each of its operations. It was employed in the TCC compiler [17, 35] for dynamic run-time compilation.
Another approach [6] also targeted dynamic code specialization; it combines prepared machine-code templates and fills the remaining holes for dynamic constants during run-time compilation. Consel et al. [12, 32] generated templates from C code together with the code required for their instantiation. They located patch points in the object files using block labels and introduced the idea of using external symbol addresses to model unknown run-time constants [32]. These approaches target computationally intensive kernels and require handwritten templates, annotations, or specialized compilers. Our approach does not prioritize run-time performance and works without offline program preparation. A more recent application of template-based code generation is found in the initial version of QEMU [8]. Guest instructions were mapped to a set of micro-operations, which were implemented as hand-crafted templates that can be combined to generate target code. The templates were written in GNU C, making use of special GCC flags for specific register assignments, and reused the idea of external symbol addresses for run-time constants [18]. Nonetheless, this approach was later dropped in favor of raising the input instructions to the TCG intermediate representation [9]. Template-based code generation is nowadays present in baseline compilers for adaptive execution [5, 7, 41] and in lightweight assemblers [3, 23, 33]. Both applications differ from our approach in that they operate on byte-code or native-code inputs, whereas we generate code from a high-level representation.

10 Summary

In this paper, we outlined a template-based code generation approach for MLIR. Our template generation leverages existing lowerings of MLIR instructions through LLVM and thereby overcomes the limitations of state-of-the-art approaches, which require explicit handwritten templates.
Our results show compile-time improvements over the existing LLVM -O0 pipeline in the 10–30x range. Run-time performance is typically 1–3x slower, but even comparable or improved on a few programs. Our approach can be integrated into existing MLIR workflows with moderate effort and provides a fast compilation tier with only slightly slower execution.

Data Availability Statement

The sources for our template-based MLIR compiler and the respective benchmark data are available on Zenodo [15].

References

Received 13 November 2023; accepted 23 December 2023
Verification of Concurrent Software

B. Tech. Seminar Report

Submitted in partial fulfillment of the requirements for the degree of Bachelor of Technology

by Prathmesh Prabhu (Roll No: 06005002)

under the guidance of Prof. Supratik Chakraborty

Department of Computer Science and Engineering, Indian Institute of Technology, Bombay, Mumbai

## Contents

1 Introduction
2 Putting it into perspective
2.1 Formalization
2.2 The twin schools of reduction and refinement
3 Practical Approaches to Parallel Program Verification
3.1 Type safety for concurrency
3.1.1 Type System for Race Freedom
3.1.2 Type System for Atomicity
3.1.3 Type Inference
3.2 Model Checking
3.2.1 Method View Consistency
3.2.2 KISS - Keep It Simple and Sequential
3.2.3 Iterative Context Bounding
4 Conclusion

Abstract

The increasing complexity and widespread use of concurrent programs, coupled with the spread of software systems controlling costly, heavily loaded, and safety-critical equipment, has led to the need for benchmarking multithreaded software systems and verifying their reliability. This seminar surveys some of the recent approaches to practical software verification.

1 Introduction

The importance of software verification was brought to the foreground by a number of failures of costly and safety-critical systems due to unforeseen and often trivial-seeming bugs in the system software. Quite a few famous disasters caused by software malfunction can be cited that justify software verification as a central pursuit in academia as well as industry.[?]
In November 1985, a 2-byte buffer overflow led to a discrepancy in a Bank of New York securities issue that involved the mismanagement of securities worth 20 billion USD. A Russian Mars probe was ordered in September 1988 to commit suicide by ground control when they mistakenly transmitted one wrong character in a message that was, in its entirety, some 20 pages long. A small bug in a switching-station software upgrade by AT&T led in 1990 to a malfunctioning of the entire New York CCS7 network until the upgrade was retracted and the company forced to apologize and compensate its clients. Besides such instances of costly failures, there have even been losses of life, as in the case of the Therac-25 instruments, where programming errors led to the death of at least 3 people by radiation overdose during treatment with these machines. There are thus enough examples justifying the need for robust and more reliable software systems as we rely ever more on autonomous equipment in critical applications. To date, manufacturers have relied on rigorous testing as the primary approach to producing reliable software, but it is an accepted fact that testing is scarcely good enough to guarantee quality. Over the years, it has been found that programmers on average make 50 errors every 1000 lines of code, and even after thorough testing, finished and well-tested software has on average about 3 errors per 1000 lines lurking in the code. Considering the increasing complexity and size of the code that today's software consists of, this directly implies a plethora of bugs inherent in the system despite the usual testing measures. Quoting Dijkstra: "Testing can be a very effective way to show the presence of bugs, but it is hopelessly inadequate for showing their absence". These observations clearly call for a more complete software verification protocol. A recent development in the field of programming methodology (or programming art!)
is the shift towards concurrent programming. Concurrent programming can be succinctly defined as "the simultaneous execution of multiple interacting computational tasks". As the growth curve of single-processor capability appears to be flattening out, using multiple threads of execution is fast becoming an alternative means of increasing computational efficiency and speed. Besides multithreading, the advent of multiprocessor machines and distributed computing has added to the allure of concurrent programming. With more and more software employing heavily multithreaded programs, software verification also needs to adapt to the changing needs. In this seminar, we study some of the more recent approaches to the problem of verifying concurrent programs, placed in the perspective of the overall theory of program verification as developed over the last few decades.

Organization of the Report

This report discusses a few recent approaches to the problem of software verification for concurrent programs. These approaches treat concurrency as the central theme of development while building on top of traditional sequential program verification techniques, learning from the decades of experience in program verification that was mainly limited to sequential software. To put things in perspective and to aid the understanding of the underlying concepts of program verification, we start with a review of the theoretical foundations. Section 2 explains the historical treatment of the problem of verification that led to the conception of a new branch of computer science. It then goes on to treat how these concepts were extended to the world of parallel programs as early as the 70s, and touches upon the different schools of reasoning about concurrent programs that grew up around them. Section 3 discusses some recent and markedly different approaches to the said problem.
A striking commonality of these treatments is the stress laid on finding practically implementable and efficient solutions, which stems from the fact that such verification tools are in great demand in industry today. We start off by discussing in detail an approach that deals with the problems raised by concurrency by statically type checking programs and declaring them to be type safe. We then move on to more testing-like approaches that involve model checking programs by various means. As the state space of a concurrent program can be unbounded even with finite program length and data, novel approaches to making these programs amenable to model checking are discussed. Finally, Section 4 concludes with the writer's commentary on the methods discussed and the insight gained through the exercise.

2 Putting it into perspective

Before venturing into the different ways in which concurrent programs may be verified, it is important to understand the meaning of program verification and also the theoretical framework on which all the approaches rest. When we talk about "verifying a program" we essentially claim to be able to decisively show that the program has certain well-laid-down properties. In program verification, these properties are usually given in the form of assertions: claims that certain program variables have given values, or satisfy a given relationship, at some moment in the execution of the program. Hence, one may claim the following: if a given set of mathematical relationships, possibly involving the program variables, holds before the execution of the program, and assuming that the hardware and other external factors do not alter the execution of the program, then a given set of mathematical relationships may be claimed to always be satisfied after the program has run. Program verification essentially tries to either formally prove or otherwise guarantee such claims about the programs under consideration.
An important point to note here is the assumption that factors beyond the written code and the given inputs do not affect the execution of programs. This assumption is fundamental in order that program results be reproducible, and it is inherent in the theory and practice of formal verification. It is of course possible to question this very assumption and hence the purported effectiveness of verification. This seminar, however, refrains from discussing this issue and moves forward with the assumption that program verification is indeed a topic worth the notice it has received. Hoping for the best, we assume that no alpha particle will strike the machine running software verified by one of the discussed techniques, flipping an important register and leading to utter failure of the whole system!

2.1 Formalization

The above idea of verifying assertions was formalized for programs by C.A.R. Hoare, founding the field of program verification. Hoare logic is a formal logic whose syntax captures the semantics of a programming language; it is used to derive theorems, which are programs annotated with associated assertions. Hoare logic formulae are of the form
\[ \{P\}\; Q\; \{R\} \]
where \(P\) is an assertion that must hold before the execution of the program \(Q\), and \(R\) is an assertion that can then be proved to hold after the execution of \(Q\), provided that \(P\) was true before the execution. The proof technique relies on breaking down the given program into individual statements such that the overall assertions about program correctness rely on properties (invariants, as for loops) that are maintained by these smaller pieces of code. If these properties can be proved for the smaller pieces, the logical framework provides a way to combine the assertions and infer properties of the larger code block from those proved for the smaller parts of the program. In this way a proof for the whole program can be built.
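As a concrete instance (a standard textbook example, not drawn from this report), Hoare's assignment axiom \(\{P[E/x]\}\; x := E\; \{P\}\) yields the triple:

\[
\{x = n\}\; x := x + 1\; \{x = n + 1\}
\]

Here the postcondition is \(P \equiv (x = n + 1)\); substituting \(x + 1\) for \(x\) in \(P\) gives the precondition \(x + 1 = n + 1\), i.e. \(x = n\), which is exactly what the triple asserts.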
Hoare logic was later shown, by Hoare himself and others, to be applicable to proofs of parallel programs. As treated by Owicki & Gries, the formal system could handle parallel programs if augmented with the cobegin and await syntactic constructs. Programs running in parallel and using synchronization primitives could then be encoded in Hoare logic and proved correct by first proving each component program correct in isolation (as if it ran independently of the others) and then showing that the assertions used in each proof are not invalidated by the execution of the other programs. Owicki & Gries called this independence the property of being interference-free. Besides assertions about specific program variables, the questions of termination of the program and freedom from deadlock can also be treated within this formal system.

2.2 The twin schools of reduction and refinement

As program size and thread interactions grow, formal proofs of correctness tend to become cumbersome. In order to reason about properties of practical programs, some representational shortcut for program proving is essential. We find here the dichotomy of bottom-up and top-down reasoning in proof techniques. Doeppner[?] suggested the technique of refinement to prove large concurrent programs. We begin by proving the correctness of the given program assuming the whole program to be atomic, i.e. we assume that the whole program takes place in one step and hence there is no question of interference from other programs running concurrently with it. Having proved correctness thus, we refine our proof by weakening the earlier assumption of atomic execution: only parts of the program are now assumed to be atomic, and the program comprises these single-step parts.
The essential step here is to choose the refinement in such a way as to preserve the earlier proof of correctness, or so that the earlier proof can be modified to accommodate the possible interleavings of programs introduced by the refinement - the idea of an expansion being consistent. By repeated applications of refinement, we aim to reach a point where the only steps assumed to be atomic are indeed executed as single-step instructions by the underlying hardware, and hence our proof applies to the executing program. A complementary approach is that of reduction, as suggested by Lipton[?]. Here we begin by treating the given program P and assuming certain statements to be uninterruptible in order to prove the desired properties of the program. Let R be such a statement. If R is in reality interruptible, i.e. its execution may be interleaved with the execution of some instruction in some other program, possibly affecting the effect of R, then our proof of correctness of P need not hold (as our assumption was ill-founded). But if it can be shown that P has the same properties whether or not R is actually interruptible, then our proof of correctness goes through. Let P/R be the program obtained from P by assuming R to be uninterruptible. Then, we want to prove: P has a property S iff P/R has the same property. All reductions P/R for which this statement holds are called D-reductions. Many important properties such as halting are preserved by D-reductions, and these reductions are used extensively in the techniques described in this report.

D-reduction

Since D-reductions are used extensively by the techniques discussed below, let us delve a little deeper into the theory. Let programs P and Q be running in parallel and let p₁ p₂ p₃ p₄ q₁ q₂ p₅ q₃ ... be an execution trace, where p₁ is the first instruction of P and so on.
An instruction pᵢ of P is a left mover if interchanging it with an instruction qⱼ immediately preceding it in the trace leaves the required properties of the program execution unchanged. A right mover is defined symmetrically, with respect to an instruction immediately following it. Now, reducing a statement sequence $S_1, S_2, S_3, \ldots, S_k$ to an uninterruptible block $[S_1, S_2, S_3, \ldots, S_k]$ is a D-reduction if the sequence splits as $S_1, \ldots, S_{i-1}$, $S_i$ and $S_{i+1}, \ldots, S_k$ such that $S_1, \ldots, S_{i-1}$ are right movers and $S_{i+1}, \ldots, S_k$ are left movers. Such a reduction preserves the proof of correctness of the given program. Essentially, if some instructions can be commuted to the right without changing the required properties of the execution, and the instructions following them can be commuted to the left, then the whole code block can be assumed to execute together: for any execution where this is not the case, the above commutations can be applied to obtain an execution in which all these instructions occur together and which preserves the required properties of the original execution. It may be noted that acquiring a lock is an example of a right-mover action while releasing a lock is a left mover, a fact that is used extensively in practical proof methods. Equipped with this background in program proving, and with the dual approaches of expansion and reduction, we may now venture into practically feasible methodologies that tackle the problem.

3 Practical Approaches to Parallel Program Verification

We have so far discussed theoretical concepts involved in proving parallel programs. It is necessary to note that these techniques are not applied directly to prove large software systems. Coming up with proofs for programs is a fairly involved exercise, as can be realized by trying to prove even a slightly non-trivial program using the basic methods.
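Even so, the mover claims from Section 2.2 (acquiring a lock commutes to the right past another thread's step, releasing commutes to the left) can be checked by brute force on small examples. The following Python sketch is purely illustrative; the two-variable state and all names are invented, not taken from Lipton's paper:

```python
# Brute-force check of the mover property for lock operations on a toy state.

def acquire_A(s):
    # thread A acquires the lock; only enabled when the lock is free
    if s["lock"] is not None:
        return None                    # step not enabled in this state
    t = dict(s); t["lock"] = "A"; return t

def release_A(s):
    t = dict(s); t["lock"] = None; return t

def write_y_B(s):
    # an action of thread B that does not touch the lock
    t = dict(s); t["y"] = t["y"] + 1; return t

def run(state, steps):
    for step in steps:
        state = step(state)
        if state is None:
            return None                # infeasible trace
    return state

s0 = {"lock": None, "y": 0}

# Right mover: acquire_A followed by B's step reaches the same state as
# B's step followed by acquire_A (whenever both orders are feasible).
assert run(s0, [acquire_A, write_y_B]) == run(s0, [write_y_B, acquire_A])

# Left mover: release_A commutes with a step of B that precedes it.
s1 = {"lock": "A", "y": 0}
assert run(s1, [write_y_B, release_A]) == run(s1, [release_A, write_y_B])
```

The key observation encoded here is that once A holds the lock, no feasible step of B can involve that lock, so B's steps are independent of A's acquire and the two commute.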
For the purpose of proving software systems practically, we need autonomous systems that can reason about and/or verify program correctness. We discuss some such approaches shortly. All these methods rely heavily on the theory developed so far but do not apply it directly to obtain formal proofs. The different methods of concurrent program proving can be said to loosely conform to one of the following three approaches, among others.

- The deductive approach follows the theory most closely, in trying to come up with static, pre-runtime proofs of the correctness of programs, guaranteeing their safe execution. The first of the approaches discussed builds a type system that provides a compile-time guarantee of program correctness and falls into this category.

- Another approach to program verification, one that resembles "testing", is model checking, where we attempt to exhaustively reason about all possible execution paths of a parallel program, trying out all possible interleavings and ensuring that the specified properties hold for all these executions. For the reasons discussed later, an exhaustive search is not possible for parallel programs, and different methods handle this problem of state-space explosion in different ways.

- Lastly, a third strain of verification proposes a pre-emptive approach called Transactional Memory. This stream of techniques relies on providing an abstract memory model with ways to define interruption-free functions and variables that can be assumed to work correctly even in the presence of other processes. The onus of maintaining atomicity and other properties is relegated to the implementation of the TM and is handled using well-developed methods and insights learned from database transactions, either in software (Software Transactional Memory) or hardware (Hardware Transactional Memory). We do not discuss these methods in this report.
3.1 Type safety for concurrency

A typical type system aims at formalizing a given set of constraints on the values that a variable can take and on the way that variable is treated in the context of the program. A similar idea can be extended to cover concurrency, i.e., one can develop a type system that ensures certain concurrency-related properties for type-safe programs. The properties we treat here are race freedom and atomicity.

Race Freedom vs. Atomicity

It is important in our current context to distinguish between a method (function) that is free from races and one that is atomic. To be free of race conditions means that no two processes executing at the same time may access a shared variable simultaneously. Let A and B be two processes that share a common variable V. Then read_A(V) and write_B(V) occurring at the same time is a race. Races are unacceptable because the value read by A in this case can be corrupted: B might be halfway through its write when A reads the value. Atomicity, on the other hand, means that any execution of process A can be assumed to have happened in one single step, and the actual interleaving of process A with B does not change the effects of the execution of A. It is interesting to note that although atomicity and race freedom often occur together in programs, neither is implied by the other, as shown by the examples in the figures below. The program in fig 1a is race free but is not atomic, as cur may become outdated by the time bal is updated, leading to an incorrect update to the balance, while that in fig 1b is atomic despite having a possible race condition in the function read.

```
int read(){
  synchronized (this) { return bal; }
}
int add(int val){
  int cur = read();
  synchronized (this) {
    bal = cur + val;
  }
}
```

*fig 1a.*

```
int read(){
  return bal;
}
int add(int val){
  synchronized (this) {
    int cur = read();
    bal = cur + val;
  }
}
```

*fig 1b.*

### 3.1.1 Type System for Race Freedom[?]
Concurrent access control is handled in Java by the use of locks that must be held before accessing the guarded shared variables. As only one thread at a time can hold a given lock, this ensures that concurrent updates do not occur. A type system that verifies that the correct locking is observed does the dual job of giving a way of formally specifying, as well as checking, the locking discipline of a program.

*The Annotation:* In order to specify which locks must be held at which points in the program, every variable declares at its declaration which lock guards it. This is done using the keyword `guarded_by`. A variable thus declared must then be accessed only while the required lock is held by the thread. Similarly, a method declares, using the keyword `requires`, the locks that must be held whenever a call to it is made.

```
field ::= [final]_opt type field_name guarded_by lock = expression
meth  ::= type method_name(arguments) requires lock_set { expressions }
defn  ::= class class_name<ghost_var*> body
```

In order to provide flexibility with respect to these annotations, classes can be defined with *ghost variables*. Ghost variables are locks passed as parameters during instantiation; the actual locks are then substituted into the `guarded_by` and `requires` clauses. Besides, every class has its own lock that we refer to by `this`. Finally, some classes can be defined to be thread-local in order to ease working with the sequential parts of the code. We talk about these classes in some detail later.

*The Type system:* Having annotated the code thus, it can now be checked by type inference whether each of the declared lock constraints is indeed respected at every field access.
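The essence of such a check can be sketched as a small lock-set walker: traverse the statements of a method, track the set of locks currently held, and flag any field access whose declared guard is not in that set. This Python fragment is a hypothetical illustration (the `guarded_by` table and the statement encoding are invented), not the actual type checker:

```python
# Hypothetical mini lock-set checker: fields declare a guarding lock, and
# every access is checked against the set of locks currently held.

# field name -> lock that guards it (the `guarded_by` annotation)
guarded_by = {"bal": "this"}

def check(method_body, held=frozenset()):
    """method_body is a list of ('sync', lock, inner_body) or ('access', field);
    returns the list of fields accessed without their guard held."""
    errors = []
    for stmt in method_body:
        if stmt[0] == "sync":              # synchronized (lock) { inner }
            _, lock, inner = stmt
            errors += check(inner, held | {lock})
        else:                              # read or write of a field
            _, field = stmt
            if guarded_by[field] not in held:
                errors.append(field)
    return errors

# the access to bal happens under the right lock: accepted
ok_body = [("sync", "this", [("access", "bal")])]
# a racy variant: bal touched with no lock held: flagged
bad_body = [("access", "bal")]

assert check(ok_body) == []
assert check(bad_body) == ["bal"]
```

The formal type rules that follow play the same role, but carry the lock set `ls` through the typing judgment instead of through a recursive walk.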
The core of this system is a set of rules of the form
\[ P;\, E;\, ls \vdash e : t \]
where \(P\) is the program under consideration, \(E\) is the environment that provides the types of the free variables of \(e\), and \(ls\) is the set of locks currently held at this point of program execution. Such a rule states that \(e\) can be inferred to be a valid expression of type \(t\). Let us look at an example of a type rule in this system that also highlights an important implementation issue. The rule type checks a reference to a field \(fd\) of a class \(c\) in the program \(P\):

\[
\frac{\begin{array}{c}
P;\, E;\, ls \vdash e : c \\
P \vdash ([\mathit{final}]_{opt}\ t\ fd\ \mathit{guarded\_by}\ l = e') \in c \\
[e/\mathit{this}]\, l \in ls \qquad P;\, E \vdash [e/\mathit{this}]\, t
\end{array}}
{P;\, E;\, ls \vdash e.fd : [e/\mathit{this}]\, t}
\quad \text{[EXP REF]}
\]

This rule checks that \(e\) is a well-typed expression of type \(c\), where \(c\) is a class defined in the program. It then checks that \(fd\) is a field of class \(c\), of type \(t\), guarded by \(l\). Next, it checks that the lock \(l\) is held at this point in the program. To do this, it substitutes \(e\) for `this` in the lock expression \(l\) - which was written from the perspective of code inside \(c\) - and then checks the membership of the resulting lock in \(ls\). Finally, it checks that \(t\) is a well-defined type. If all these checks go through, then the type checking of the inferred expression also goes through. A type-safe program is thus guaranteed to hold the proper locks at all accesses to variables, and no race conditions occur. One last note worth making is the introduction of thread-local classes in the type system. These are classes that are not shared by threads, and hence the annotation effort can be spared for them. So as to maintain the race freedom of programs, a clear distinction is necessary between the shared and thread-local classes.
Shared classes are classes that have a shared superclass and have only sharable fields. A thread-local class, on the other hand, has thread-local fields and may inherit from either kind of class. A thread-local class instance being treated as an instance of a shared superclass cannot be downcast back to a thread-local type in a shared method.

3.1.2 Type System for Atomicity

The type system developed by Flanagan and Qadeer[7] for atomicity builds upon the type system just described. In addition to the guard annotations, we now specify atomicity properties to be satisfied by the procedures of the program and then verify that these atomicity conditions are indeed observed by the implementation. For this, we first develop a framework of atomicity types for program code. Based on Lipton's theory of D-reductions, every sub-program (including individual instructions, procedures and the program itself) can be said to have one of the following atomicities:

- **Const**: The evaluation of the piece of code does not depend on or change any state.
- **Mover**: As defined by Lipton, the evaluation is both a left and a right mover.
- **Atomic**: The whole evaluation can be assumed to happen atomically.
- **Cmpd**: The evaluation is a sequence of distinct steps, and interleaving with other processes may change the effects of this evaluation.
- **Error**: The evaluation violates some locking principle of the program. This is the atomicity we are trying to eliminate from our program.

These atomicities follow the subtyping relationship
\[ \text{Const} <: \text{Mover} <: \text{Atomic} <: \text{Cmpd} <: \text{Error} \]
For example, if a statement is of type Const, then it is certainly also a Mover, since it can be commuted in either direction without having any effect on the result of this or other threads. Once the atomicities of individual program statements are known, the atomicities of larger blocks of code can be inferred.
For example, a code block all of whose instructions have atomicity Mover is itself a Mover (since every instruction may commute left as well as right, the whole code block can commute as well), while a code block of Atomic instructions is Cmpd (since another process interleaving between two atomic sub-programs can affect the execution of the overall program), etc. The process of combining the atomicities of consecutive code blocks to get the overall atomicity is called **sequential composition** and is denoted by the operator ";":
\[ \text{mover} \; ; \; \text{atomic} = \text{atomic}, \text{ etc.} \]
Again, the non-deterministic choice between executing two statements of atomicities \(\alpha_1\) and \(\alpha_2\) has the atomicity
\[ \alpha_1 \sqcup \alpha_2, \]
called the join of \(\alpha_1\) and \(\alpha_2\). These atomicities are the base types of this type system. To account for conditional statements, where either one of two execution paths may be taken, conditional atomicities have to be introduced:
\[ l \; ? \; T_1 : T_2 \]
means that if the lock \( l \) is held then the atomicity is \( T_1 \), else \( T_2 \). Our annotated program now consists of these atomicity tags added to the annotations discussed above. Hence, every method must be declared to have one of these atomicities.

**The Type Checker**

The ideas presented above for developing a type system extend directly to the added task of inferring atomicity types for statements and code blocks. The primitive instructions are assigned one of the basic atomicities.
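The composition and join operations just described can be encoded in a few lines. The following Python sketch is a toy encoding consistent with the rules stated in the text (movers compose to movers, two atomic blocks in sequence are merely compound); it is an illustration, not the paper's implementation:

```python
# Toy encoding of the atomicity chain Const <: Mover <: Atomic <: Cmpd <: Error.
CONST, MOVER, ATOMIC, CMPD, ERROR = range(5)

def join(a, b):
    """Non-deterministic choice: the least atomicity above both."""
    return max(a, b)

def seq(a, b):
    """Sequential composition a ; b."""
    if max(a, b) >= CMPD:
        return max(a, b)              # cmpd and error absorb everything
    atomics = (a == ATOMIC) + (b == ATOMIC)
    if atomics >= 2:
        return CMPD                   # atomic ; atomic = cmpd
    if atomics == 1:
        return ATOMIC                 # mover ; atomic = atomic, etc.
    return max(a, b)                  # const and mover compose freely

def iterate(a):
    """Zero or more repetitions a*: least fixed point of x -> join(CONST, seq(x, a))."""
    x = CONST
    while True:
        nxt = join(CONST, seq(x, a))
        if nxt == x:
            return x
        x = nxt

assert seq(MOVER, MOVER) == MOVER
assert seq(MOVER, ATOMIC) == ATOMIC
assert seq(ATOMIC, ATOMIC) == CMPD    # why a block of atomics is compound
assert iterate(MOVER) == MOVER
assert iterate(ATOMIC) == CMPD        # repeating an atomic block is compound
```

Under this encoding, the atomicity of a loop whose guard has atomicity a1 and body a2 is `seq(a1, iterate(seq(a2, a1)))`, matching the shape of the while rule used by the type checker.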
The atomicities of the procedures are inferred using rules of the kind shown before:
\[
\frac{P;\, E \vdash e_1 : \mathit{int}\ \&\ a_1 \qquad P;\, E \vdash e_2 : t\ \&\ a_2}
{P;\, E \vdash \mathit{while}\ e_1\ e_2 : \mathit{int}\ \&\ (a_1; (a_2; a_1)^*)}
\quad\text{[EXP WHILE]}
\]
This rule states that if \(e_1\) and \(e_2\) can be typed as in the premises, with atomicities \(a_1\) and \(a_2\) respectively, then the atomicity of the while loop is \((a_1; (a_2; a_1)^*)\): the guard is evaluated once, followed by zero or more repetitions of the body and the guard. The inference rules thus capture both the intuition behind the basic atomicity types of primitive statements and the way these atomicities compose in the program.

### 3.1.3 Type Inference

In the approach discussed above, the onus of annotating a program in order to be able to argue about its type safety lies largely on the programmer. An intriguing next step in developing the suggested type system is to infer consistent atomicity types for the program's procedures given an unannotated or partially annotated program[?]. We approach this problem by extracting from the program a system of constraints that must be satisfied by any assignment to the unknown atomicities. We then solve this system of constraints, i.e., we find an assignment of atomicities to the unknown methods such that all the constraints are satisfied, proposing a simple algorithm with this aim. To come up with the necessary constraints, we first extend the type system developed so far a little further. To accept methods with unknown atomicities, we introduce *atomicity variables* that are placeholders for the atomicity types inferred by the analysis. Our type inference system now consists of rules of the form
\[ P;\, E \vdash e : t \;/\; C \]
where \( C \) is the set of constraints generated while inferring the type of the statement. For example, let the judgment \(D\) be inferred from the premises \(A\) and \(B\),
\[ \frac{A \qquad B}{D} \]
and if this inference requires the atomicity type \( \alpha_A \) of A to be a subtype of the atomicity \( \alpha_B \) of B, then D can be inferred with the constraint \( \alpha_A \sqsubseteq \alpha_B \). The constraints thus obtained accumulate during the constraint generation phase. Once the whole program has been type checked, the generated constraints are solved to get a satisfying assignment. Due to the form in which the constraints are generated and propagated in the type rules, all obtained constraints are of the form \( \alpha \sqsubseteq d \), where \( \alpha \) is an expression containing atomicity variables and \( d \) is a closed (known) atomicity. A satisfying assignment is one that satisfies all the subtyping constraints in the constraint set \( C \). To find a satisfying assignment, we start by assigning the lowest possible atomicity, \( \text{Const} \), to all variables and then raise the assignments incrementally, by the least possible amount, until we reach a fixed point. The methods discussed above have been implemented for the full Java language and tested on non-trivial programs. The results were encouraging, with many known and previously unknown violations detected in the tested Java libraries [8][9].

3.2 Model Checking

The second set of methods we discuss is based on model checking the given program. Model checking is traditionally done by capturing the relevant properties of the given program in an abstract model drawn from the program, and then exhaustively reasoning about the required properties on this model. Typical model checkers for sequential programs exploit the fact that primitive data types in most languages have finite ranges. This, coupled with the finite length of the program text, means that there are only finitely many states that the program can be in (where a state is a particular combination of the program's control location and the current data).
This reduces model checking to an exhaustive check over a finite (albeit large) state space. The moment we introduce recursive function definitions, as found in most common languages today, the state space becomes unbounded. In parallel programs, there is no bound on the number of threads that may be running at a given moment, and every possible combination of the states of these threads forms one state of the whole system, exploding the state space further. Even with a finite bound on the number of threads spawned at any given time, the interleaving of these threads significantly increases the complexity of model checking. The first two approaches discussed below reduce the problem of modeling concurrent programs to that of modeling a derived sequential program, along with some extra work to ensure that this modeling is consistent with the original concurrent program. The last approach actually models concurrent programs, laying stress on ways to optimize the search and cover the maximum state space possible.

3.2.1 Method View Consistency

The first approach we consider is a model checking algorithm that runs on an abstract model derived from the compile-time internal representation of the Java program. To tackle the concurrency issue, we define method consistency and check that our model has this property. The following model checking algorithm, developed by Praun & Gross[?], is intended to verify a given property of object-oriented programs written in Java, keeping in view the issues raised by multiple threads running in parallel. The algorithm discussed here is *neither sound nor complete*, i.e., it may miss some instances of concurrency violations, while it is also possible that some of the alarms raised are false alarms.

**The algorithm runs on the HSG:** The abstract model we choose for our algorithm is drawn from the *Heap Shape Graph (HSG)* that the Java memory model creates during compile time.
The HSG has static nodes corresponding to each of the classes defined in the program, and during runtime any instantiation of a class is carried out by copying out an instance from the model in the HSG. For our analysis, we work directly on the HSG and assume that any two instances created for a class are one and the same, so that two threads working on data structures in these two instances actually interfere. Thus, our analysis is on the conservative side, assuming that the maximum possible interference takes place. It is found that this conservative approach does not really increase the number of false alarms significantly, since in typical programs most threads that work on shared data do work on the same class instance.

**Method Consistency:** The modeling of concurrent access to methods is handled in this approach through the idea of method consistency. First, a couple of definitions are called for.

A *lock view* is a set of ⟨variable, access⟩ pairs that models the variables accessed, and the type of access, under a given lock at runtime. An access may be a read (r) or an update (u). The set of lock views of thread t is written \(L_t = \{l_1, l_2, l_3, \ldots, l_k\}\).

A *method view* is a set of ⟨variable, access⟩ pairs that models the variables accessed during a method call at runtime. Here both read and update accesses are added to the set, and the set of method views of a thread t is \(M_t = \{m_1, m_2, m_3, \ldots, m_k\}\).

Two views are said to overlap if their intersection is non-empty, i.e. some variable is accessed in the same way in both views. A set \(V\) of views forms a chain with respect to a view \(v'\) if the intersections with \(v'\) are totally ordered by inclusion, i.e.:
\[ u \in V \land v \in V \implies (u \cap v' \subseteq v \cap v') \lor (v \cap v' \subseteq u \cap v') \]
A program is said to be method consistent if, for every method of the program, the lock views overlapping the method's view form a chain with respect to it.
What this means is that, for any method of the program and any variable access within it, the locks held must be nested; this idea is made clear by the picture above. Method consistency correctly captures a very common cause of atomicity violations, where a variable is read in a method under one lock and then written back under another lock. The possibility of a stale value of the variable being written back leads to inconsistency; method consistency ensures that such stale values are caught during model checking.

**Model Checking:** As described above, the model checker does an exhaustive search on the HSG model of the Java program. During this search, the model checker assumes the program to be sequential and then aims to guarantee atomicity by checking relevant methods for method consistency. To begin with, methods from different classes cannot interfere because, in object-oriented programs, the only data members visible are those within the class. Even within a class, methods that are not synchronized are pruned from the search. The sets of relevant methods in each class are then checked for method consistency by evaluating the view overlap of all locks with all methods and checking the chain property.

Thus, this model checker aims at verifying the given program through an exhaustive model search. To contain the state-space explosion seen with concurrent programs, it verifies the program assuming that it is the only program running (that is, in sequential mode), and then guarantees that errors due to concurrency do not arise by checking the relevant methods for method consistency. This is a simple and effective approach to model checking, but it is only an approximation to verification and cannot offer guarantees.
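The chain condition at the heart of method consistency reduces to a pairwise inclusion test over intersected views. A small Python sketch with invented views (not the actual checker over the HSG):

```python
# Method-view consistency check, sketched: the lock views overlapping a
# method view must form a chain (be totally ordered by inclusion) once
# intersected with it. Views are sets of (variable, access) pairs.

def is_chain(lock_views, method_view):
    overlaps = [lv & method_view for lv in lock_views if lv & method_view]
    return all(u <= v or v <= u for u in overlaps for v in overlaps)

m = {("x", "r"), ("x", "u"), ("y", "u")}     # the method's view

# Nested locking: one lock's overlap contains the other's -> consistent.
nested = [{("x", "r"), ("x", "u"), ("y", "u")}, {("x", "u")}]
assert is_chain(nested, m)

# x read under one lock, written back under an unrelated lock that also
# touches y: the overlaps {(x,r)} and {(x,u),(y,u)} are incomparable.
split = [{("x", "r")}, {("x", "u"), ("y", "u")}]
assert not is_chain(split, m)
```

The second example is exactly the stale-value pattern described above: because neither overlap contains the other, the locks guarding x are not nested, and the check flags the method.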
### 3.2.2 KISS - Keep It Simple and Sequential This approach with the catchy name works by reducing the given concurrent program to a sequential one that models, in a restricted way, the runtime behavior of the original program. The new sequential program can then be checked by any existing sequential model checker (here SLAM) for errors arising from concurrency issues as well as for ordinary sequential errors. It is important to note that the sequential program obtained does not model all possible executions of the original concurrent program, but only a small subset of them. The hope is that problems arising from concurrency will manifest themselves even in the restricted form of concurrency modeled by this approach. **Program transformation:** We model the execution of the concurrent program by introducing a non-deterministic scheduler that may do one of the following at any point in the execution of the original program: 1. Start a thread that the concurrent program invoked asynchronously at some point in the past. 2. Terminate an existing thread. To implement this non-deterministic scheduler, instrumentation code is added after every line of code in the original program. The new program obtained has a single stack (it is a sequential program). This single stack stores, in contiguous blocks, the contexts of all the threads that are currently "active", i.e., have been started by the scheduler some time in the past but have not yet terminated. If the "scheduler" decides to terminate the currently running thread, it sets a local variable "raise" to true and returns at that point. The raise variable causes an immediate return by all functions down the call stack of the current thread, thereby effecting an "immediate" return by the thread. The thread below the current one on the call stack then continues its execution from the point where the terminated thread was started.
In order to schedule new threads, and to do so in a bounded fashion, we introduce a global array $ts$ that keeps track of all scheduled threads. Whenever the concurrent program invokes a function asynchronously, the function is added to $ts$ if $ts$ is not already full. The scheduler may then, non-deterministically, decide to invoke this function from $ts$ at any time in the future, meaning that a new context is created on top of the stack and the new call stack resides there. If $ts$ happens to be full, the asynchronous call is converted to a synchronous call and the new thread runs immediately. **The scheduler and race detection code:** When we say that the scheduler decides to start a scheduled thread, we mean that a part of the instrumentation code added between the lines of the original program makes this choice. The scheduler is implemented, at an abstract level, by a function $schedule()$ that is called after every line; it goes through the $ts$ array and non-deterministically decides to run any of the threads. Code exhibiting such non-deterministic behavior can be checked efficiently by sequential model checkers like SLAM. The instrumentation code also contains a choice between RAISE and NULL, where the former leads to immediate termination of the running thread while the latter does nothing. Finally, the instrumentation code must also check that all required assertions hold at each point and that no race condition exists. A possible way to check for race conditions is with functions like:

Figure 2: A pictorial view of the single-stack program. Different "active" threads reside in contiguous blocks on the stack.

```c
check_r(x) {
  if (x == &r) {
    assert(!written(r));
    read(r);
  }
}

check_w(x) {
  if (x == &r) {
    assert(!read(r) && !written(r));
    written(r);
  }
}
```

These functions check, using the address comparison to handle aliasing, that no accessed variable has been written/read since the last step executed by the current thread.
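The schedules that this single-stack transformation can produce are exactly the nested, stack-based ones: a started thread runs on top of the stack until it finishes or is RAISE-terminated, and only then does the thread beneath it resume. The following sketch (hypothetical step labels; this is an illustration, not KISS's actual implementation) enumerates those schedules for two tiny threads. Note that a truly preemptive interleaving such as `a0 b0 a1 b1` is never generated.

```python
def explore(stack, pending, trace, traces):
    """Enumerate traces of a single-stack (KISS-style) scheduler.

    stack   -- list of threads (each a list of remaining steps); top entry runs
    pending -- threads created asynchronously, parked in the ts array
    """
    if not stack:                     # bottom thread returned: program ends
        traces.add(tuple(trace))
        return
    top = stack[-1]
    if top:                           # run one step of the topmost thread
        explore(stack[:-1] + [top[1:]], pending, trace + [top[0]], traces)
    # pop: a normal return if finished, a RAISE termination otherwise
    explore(stack[:-1], pending, trace, traces)
    # start a parked thread: its frames go on top of the single stack
    for i in range(len(pending)):
        explore(stack + [pending[i]], pending[:i] + pending[i + 1:],
                trace, traces)

traces = set()
explore([["a0", "a1"]], [["b0", "b1"]], [], traces)
print(("a0", "b0", "b1", "a1") in traces)   # nested interleaving: reachable
print(("a0", "b0", "a1", "b1") in traces)   # preemptive resume of a: never
```

Once `b` is popped (normally or via RAISE) its remaining steps are discarded for good, which is why `b1` can never follow `a1` in any trace.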
Finally, it is instructive to consider the state space covered by this approach. Clearly, it does not exhaustively try out all possible execution paths of the concurrent program. We note that the only sources of concurrency in the resulting program are the synchronous or asynchronous calls executed by the scheduler code, together with end-of-program or RAISE terminations. Hence, any thread that starts executing goes onto the stack, and many more threads may then execute preemptively before it terminates, but once this thread terminates, it never restarts. This is precisely the type of interleaving that the procedure models: a stack-based interleaving, where every new thread is pushed onto a stack and popped on normal or preemptive termination. Any number of threads may be stacked on top of a given thread during this time, giving rise to substantial concurrency even in this restricted domain. Also note that the length of the array ts determines the number of threads that may be outstanding at any time, hence acting as a knob on the extent of concurrency exhibited by the model. ### 3.2.3 Iterative Context Bounding We now move on to actually stepping through a model that captures all possible execution paths of a concurrent program. The approach developed by Musuvathi and Qadeer [?] simulates the execution of the program itself, without abstracting away from the source code, while following a systematic protocol that guarantees the correctness of the program up to a measurable metric. To tackle the problem of infinite state space in model checking, the idea of depth-bounding is often used: depth-bounded model checkers guarantee the correctness of a system for the first \( m \) steps, \( m \) being the depth of the search.
This idea was developed for, and is very effective at, modeling message queues, where any protocol violation is likely to occur within the first few messages exchanged, as long as all possible orderings of the exchange are captured. The same idea, extended to concurrent program verification, gives unsatisfactory results, because programming errors may be situated anywhere in the program code, and exhaustively verifying the first few lines of code does not in general guarantee program correctness. The complementary idea that applies well here is context bounding. A context bound is an upper limit on the number of context switches that occur in an execution of the program. We define a context switch as a preemptive replacement of a running thread by some other thread (it does not include the change of context when a thread yields or blocks on a call). A context-bounded search then tries out all possible executions of a program with no more than \( m \) context switches, where \( m \) is the search depth. Iterative context-bounded search starts with a bound of zero, i.e., program executions without any context switches, and verifies that the program is correct when each thread runs without preemption. It then iteratively increases the bound, exhausting all possibilities at the current bound before moving to a higher one. Ideally this process would continue indefinitely to cover all possible execution paths, but in practice only a small number of context switches need to be modeled to cover a significant part of the state space. Let us note a few important attributes of the search explained above. First, even with the trivial bound of zero, the modeling guarantees the correctness of the program when each thread runs uninterrupted, and the results obtained are already meaningful. Second, context-bounded search gives us a very intuitive metric for the extent of program correctness assured at the end of modeling.
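The effect of iterating the preemption bound can be made concrete with a tiny enumerator. The sketch below is hypothetical (a pair of two-step increment threads, not code from the paper): it explores all schedules with at most c preemptions, counting a switch as preemptive only when the thread being switched away from is unfinished. The classic lost-update bug on the shared counter surfaces only once a single preemption is allowed.

```python
def final_states(c):
    """Final values of a shared counter x after two threads each run
    'local = x; x = local + 1', over all schedules with <= c preemptions."""
    finals = set()

    def go(pcs, loc, x, cur, budget):
        # pcs[t]: next step of thread t (2 = done); loc[t]: its local copy
        if all(p == 2 for p in pcs):
            finals.add(x)
            return
        for t in range(len(pcs)):
            if pcs[t] == 2:
                continue
            # switching threads costs a preemption only if the current
            # thread is still unfinished (yield-after-completion is free)
            cost = 1 if cur is not None and t != cur and pcs[cur] != 2 else 0
            if cost > budget:
                continue
            nloc, nx = list(loc), x
            if pcs[t] == 0:
                nloc[t] = x          # step 0: read x into a local
            else:
                nx = loc[t] + 1      # step 1: write back local + 1
            npcs = list(pcs)
            npcs[t] += 1
            go(tuple(npcs), tuple(nloc), nx, t, budget - cost)

    go((0, 0), (0, 0), 0, None, c)
    return finals

print(final_states(0))   # no preemptions: each thread runs whole, always 2
print(final_states(1))   # one preemption exposes the lost update as well
```

With bound 0 only the value 2 is reachable; with bound 1 the erroneous final value 1 appears, mirroring the observation that small bounds already expose common bugs.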
A search up to bound \( m \) means that no errors occur in any execution with at most \( m \) context switches, irrespective of how and where these switches occur, since the state space for each bound is covered exhaustively. If modeling software execution with a bounded number of context switches faithfully simulates the actual execution of programs, then this modeling can be a very efficient way of verifying correctness under parallelism, and this is indeed found to be the case: most common bugs in concurrent software manifest themselves within as few as 3 context switches, and if up to 8 context switches are allowed then almost all known bugs can be found. Of course, this search is not exhaustive, and not all bugs can be detected by this approach, as the number of switches has to be kept small to contain the state space. The number of states visited can be shown to be bounded by \[ \binom{nk}{c} \cdot (nb + c)! \] where \( n \) is the number of threads, \( k \) is the length of the program code for each thread, \( b \) is the number of non-preemptive context switches, and \( c \) is the number of preemptions. \( n \) and \( b \) are typically small, and as long as \( c \) is kept small this is a polynomial bound in the length of the code. Hence, keeping the number of preemptions small reduces the normally exponential bound on the number of states to a polynomial one. A further optimization is possible by limiting the points at which context switches may occur. Initially, we assumed that a context switch can occur at any point in the program, but it can be shown that it is enough to schedule context switches immediately before synchronization operations, provided that race detection is carried out by other means.
Specifically, it can be shown that: * A race-free terminating execution $\alpha$ of a program is equivalent to another race-free terminating execution $\beta$ in which all context switches occur before synchronization actions, with no more context switches than the original execution $\alpha$. In view of this result, the total number of states that need to be visited decreases further, giving us a fairly efficient approach to modeling large concurrent systems. As the approach described above relies heavily on the claim that a small number of context switches effectively exposes real concurrency problems, it is important to validate these results with an actual implementation. The initial results stated in [?] are encouraging: an algorithm developed on this philosophy was tested on a number of concurrent software systems and detected a number of known and previously unknown bugs within a small context bound of 2 or 3. It was also found that almost all of the state space is covered (in terms of the number of distinct states visited) with a bound of 8 to 11. In initial testing, this method was found to work remarkably better than contemporary depth-bounded and depth-first search methods. The algorithm discussed above does cover large tracts of the state space effectively. The reader should keep in mind, though, that it merely provides a systematic approach to program testing; like other model checking methods it does not guarantee program correctness, but it provides a metric that objectively ensures correct execution in a large number of cases and weeds out the most common bugs effectively. ## 4 Conclusion Programming Language Principles is a field of computer science where one sees theory and practical implementation come close, and we notice a similar situation over the course of this seminar.
Beginning with the motivation behind program verification, which for some stems from the lure of finding structure in the programming languages we use and its effect on the correctness and effectiveness of programs, while for others is simply the need of the day, through to the approaches developed to tackle the problem, we find a series of magnificent insights from theory being brought to a practically implementable level. We see that the earliest work in the field was largely theoretical in nature. This early work helped clearly define the framework on which the subject builds and explored the implications and limitations of the newly conceptualized ideas, providing a robust base for the subject to build on. It was soon realized that the formal concepts had to be diluted in order to build meaningful verification systems: the methods of Owicki & Gries [?] were never used to prove any non-trivial software system correct, as these formal methods were found to be too cumbersome for direct application. The first strain of the approaches we discussed follows the original ideas most closely. The way these ideas were assimilated to come up with a type system for statically type checking concurrent programs is remarkable. By formalizing all the notions necessary for safe concurrent programs, such type systems have opened a new window for verification tools, and also for a programming paradigm that pays due heed to writing well-formed programs in parallel environments. They provide a means not only to verify programs but to switch to programming techniques that reduce inherent errors. The other fork of verification software we saw comprises the more usable and current verification approaches, which systematize testing by modeling and extend the most intuitive approach to program safety a step further.
Because these tools closely model the way most programmers are used to testing their programs, they go down well in the programming community. By providing a plethora of well-implemented software verifiers, these model checking programs have catered to the current needs of the industry. It is nice to note that all these approaches offer intuitive and effective methods to control state space explosion, being simple (and hence!) efficient at the same time. Finally, it is apt to note that the field of concurrent software verification is still in its youth. A lot of active research is going on in the two directions discussed here, as well as in transactional memory. One hopes that, along with tools that make software systems safer and more reliable, the insights gained will help the POPL community come up with language models tailored to the parallel environment, and help programmers inculcate better coding habits leading to cleaner code: a utopian goal of a lot of CS research! ## Acknowledgements I would like to thank my guide, Prof. Supratik Chakraborty, for introducing me to this wonderful topic and for his guidance therein. The experience has been enthralling. ## References [1] Peter H. Roosen-Runge. Why do we need software verification tools?
Exploiting Memory Corruption Vulnerabilities in the Java Runtime

Prepared By: Joshua J. Drake

December 15, 2011

Revision: 0.9

## Revision History

<table> <thead> <tr> <th>Version</th> <th>Date</th> <th>Description</th> </tr> </thead> <tbody> <tr> <td>1.0</td> <td>12/15/2011</td> <td>Initial document published.</td> </tr> </tbody> </table>

## Table of Contents

- Revision History
- Table of Contents
- Introduction
  - Scope
- Background
  - Distribution
  - History
    - Update History
  - Attack Surface
  - Design
    - Security Model
    - Process Architecture
    - Exploit Mitigation Support
  - Internals
    - Java Virtual Machine
    - Heap Specifics
- Common Challenges
  - Debugging
    - Spurious Access Violations
  - Encoding Conversion
  - Integer Signedness
  - Code Reachability
- Exploitation Techniques
  - Contrived Examples
    - Arbitrary Call
    - Arbitrary Write
    - Format Strings
    - Stack Buffer Overflow
    - Heap Buffer Overflow
  - Real-World Exploits
    - CVE-2009-3869
    - CVE-2010-3552
- Conclusion
  - Recommendations
  - Future Work
- Bibliography
- About
  - The Author
  - Accuvant Labs

## Introduction

The Oracle Java Runtime Environment (JRE) is one of the most widely deployed software packages. In a 2010 survey, JRE was found installed on 89% of end-user computer systems (Secunia). While installing Java, Oracle displays the message “3 Billion Devices Run Java” on a splash screen. Pervasive deployment is a key property that attracts vulnerability researchers and attackers hunting for bugs. Unsurprisingly, Java is often employed by attackers to compromise computer systems. Many developers choose Java as a way of avoiding the mistakes that lead to memory corruption. Although this reasoning is sound, installing a JRE on a computer system exposes it to a significant amount of risk. JRE is plagued by a long history of security problems, including vulnerabilities in its components built from native code. Based on trending, it is safe to assume that many more vulnerabilities remain to be found.

Although proliferating exploitation techniques can be controversial, it is an important area of research that should be conducted openly. The existence of a working exploit for a particular vulnerability removes the ambiguity of whether or not it could actually be exploited. Furthermore, working exploits allow administrators to unequivocally measure risk in their own environment. In short, developers, administrators, and vendors alike all take vulnerabilities more seriously when they have been proven exploitable. Exploits increase prioritization and decrease time to patch. The research presented in this document was conducted and compiled in an effort to increase public knowledge about exploiting vulnerabilities in JRE's compiled native code. Information presented includes relevant design, architecture, and implementation details.
Additionally, various difficulties and solutions encountered during exploit development sessions are documented. Finally, example code and tools accompany this paper in hopes they will prove useful when developing exploits for Java memory corruption vulnerabilities in the future.

### Scope

During planning, several decisions were made in order to limit the scope of this research due to time constraints. First, version 6 of Oracle's Java Standard Edition (J2SE) was selected. Next, a decision was made to conduct all testing on a 32-bit Windows 7 SP1 machine. Very little time has been spent researching JRE on other supported platforms or architectures. However, some effort has been expended to extend this research to JRE version 7. Although vulnerabilities which exist purely in the Java-language portions of JRE are interesting, this paper does not cover such issues. The biggest advantage of such issues is that porting an exploit to another platform may require little-to-no effort. That said, such bugs are often classified as Java-specific and must be exploited in very specific ways. Limiting research to native code vulnerabilities allows leveraging a plethora of general exploitation techniques.

## Background

In order to develop reliable memory corruption exploits for any application, knowledge is paramount. More understanding about the internals of a target application translates to increased development efficiency, exploit reliability, and elegance. The following sub-sections aim to provide insight into the distribution, history, attack surface, design, and application-specific implementation internals of JRE.

### Distribution

Oracle's JRE is available in three "editions" which are supported on a wide variety of platforms and architectures (Oracle). Micro Edition, which is typically used on embedded devices like mobile phones and set-top boxes, is not distributed in binary form.
Standard Edition installers are available for Solaris, Windows, and Linux on x86 and x86_64 (Oracle); Solaris on SPARC is also supported. Enterprise Edition installers are available for Windows and UNIX. The Enterprise Edition depends on the Standard Edition JRE or Java Development Kit (JDK) being installed. The edition most likely to be found on general purpose computers is the Standard Edition of the Oracle JRE; it is also the edition most commonly bundled with third-party applications that require a Java Runtime. Apple's Mac OS X used to include the Standard Edition JRE by default. However, in recent versions, Apple has chosen to take a more aggressive approach with respect to Java. With the release of Lion, JRE is no longer installed by default (Kessler). Furthermore, if the user later installs Java from the command line, the browser functionality still will not be enabled (Ip).

### History

Over the last half a decade, Java has been taken advantage of by numerous attackers to compromise computer systems. Out of fifteen popular exploit kits, 73% have at least one exploit that targets Java, and of those, 46% include more than one (Guido). Apart from malware exploit kits, Metasploit contains eleven exploits that target Java, of which only three exploit memory corruption vulnerabilities. Even though few known exploits depend on memory corruption bugs, a large number of such bugs in Java have been publicly disclosed.

#### Update History

Over the course of the five years that JRE version 6 has been available, there have been 29 updates. Well over 100 CVE numbers have been assigned to security vulnerabilities fixed within these updates. Surprisingly, only two updates contain changes that significantly impact exploit development. The first set of changes, and perhaps the most important, came with the release of JRE 6 Update 10; the second came with the release of JRE 6 Update 18.
#### Update 10

In Update 10, Sun introduced a new installation method for Windows installers (Oracle). Prior to this change, installing an update would leave the user with multiple versions of JRE 6 installed; this installation method is called "static configuration". In the new method, called "patch-in-place", installing an update instead replaces the currently installed JRE 6. From a security point of view this is excellent, since it means old, vulnerable versions of JRE will not persist. Fortunately, this method became the default for subsequent updates.

Also in Update 10, Sun introduced a new "Next Generation" browser plug-in (Oracle), the most important change in this update. Two features of the new plug-in stand out. One pertains to the way Java executes applets; more information about how this feature impacts exploit development can be found in the "Process Architecture" section below. The other allows a web site to control parameters passed to the JRE. One such parameter is "java_version", which allows an attacker to select the specific version of JRE that should be used to execute the applet. Thankfully, specifying versions older than the currently installed version prompts the user. More detailed information regarding the handling of this parameter is available in the release notes (Oracle). Another controllable parameter is "java_arguments", which allows passing command line arguments to the Java interpreter. Although allowed arguments are limited to a "secure set", CVE-2010-1423 allowed remote code execution due to improper handling of such arguments. Parameters within the "secure set" include the heap size and various rendering options.

#### Update 18

The second update that affected exploit development was JRE 6 Update 18. This release removed the executable flag from the Java Object Heap memory region permissions.
Before, pages in this region were readable, writable, and executable. Exploit developers could no longer execute code directly in this region. Java 7 On July 28th, 2011, Oracle released Java 7. Unfortunately, the initial release contained a nasty bug that scared off many early adopters. Specifically, loop optimizations which were enabled by default would cause incorrect execution or crashes (Waters). Like previous major releases of Java, version 7 is unlikely to be offered as an update to deployments of JRE versions 6. For this reason, wide-spread adoption is likely to take many years. Adoption rates aside, Java 7 now takes advantage of nearly a decade of security mitigation technologies. By merely switching to version 10 of the Microsoft Visual C compiler, Oracle has significantly raised the bar for exploiting memory corruption vulnerabilities within Java 7. Attack Surface Outside of the web browser, the typical use case for Java provides very little attack surface. The other Java invocation method is by way of Windows file associations. In this scenario the application gets executed with full user privileges by default. An attack using this vector would be classified as a Trojan horse attack, similar to sending a file with an “exe” extension. In the browser, most attacks rely on the Java browser plug-in being installed and enabled. This plug-in is installed and enabled by default when using the Windows J2SE installers. There are several attack surfaces exposed by the browser plug-in, but the most common attacks involve a malicious Applet. In fact, ten out of the eleven Java exploits in Metasploit use Applets. There are several reasons why most attacks use Applets. In general, this method is the lowest barrier to entry to reach the plethora of code within the JRE. Figure 2 shows the sheer number of components within the JRE. Many of these include portions of both Java code and native code. 
When this is the case, the Java code will contain native methods that it will call into as needed. Since an attacker controls all of the data and code that comprises an Applet, they can supply Java code to call such methods. Apart from Applets, “LiveConnect” and JNLP are two other technologies that depend on the browser plug-in. “LiveConnect” is the interface that bridges the gap between JavaScript in the browser and a Java Applet. Java Network Launch Protocol (JNLP) is used by Java Web Start (JWS) as well as the browser plug-in to describe applications and Applets, respectively. No in-depth research has been conducted into these attack surfaces at this time, but they are considered a target for future work.

**Design**

Choices made during the development and packaging processes can have lasting effects on the security posture of an application. Many such decisions were made during JRE development. A few of the more important selections are detailed here. The security model, process architecture, and level of support for exploit mitigation technologies are covered in this section.

**Security Model**

When Applets and Java Web Start applications are executed, the JRE checks the containing JAR archive for a digital signature. In the event a digital signature is found, the JRE tries to determine if the signature is from a trusted party. If it is not, the user will be asked whether or not they wish to trust the signing party. If the signature is trusted, the application is permitted to execute. The “java_signed_applet” exploit within Metasploit uses an Applet of this type. Vulnerabilities in native code are not necessary since the application gets executed with full user privileges. Applications without a digital signature will run without any prompting by default. When a user visits a web site that presents an unsigned Applet, Java will automatically download and begin executing it. However, the code will be subject to a “sandbox”.
Unlike the sandboxes used by Chrome, Office, and Adobe Reader, this sandbox doesn’t utilize any OS-level hardening features to enforce its boundaries. Instead, Java relies only on a “SecurityManager” class to define what operations Applets are allowed to perform. Despite the restrictions imposed within a sandboxed application, there is still a great deal of reachable native code. This includes code that parses images, sounds, compressed data, and more. The JRE even embeds old versions of various open-source libraries like zlib, libpng, and libjpeg. Exploiting a vulnerability in native code allows an attacker to bypass the sandbox and execute code with full user privileges.

**Process Architecture**

When exploiting software, it is helpful to be familiar with the high-level design of the target application. Things to consider are the process architecture, dependencies, and integration points. When a Java Applet is encountered on a web site, the browser plug-in handles downloading the necessary files and passing them to the JRE. This plug-in, including several libraries that it depends on, is loaded into the web browser’s address space. Figure 3 below shows the process hierarchy with an applet loaded for each of the three most popular browsers. As of Update 10, the execution of the Java application is done by executing java.exe as an external process. Using this design, Java applets execute in a separate address space from the browser. This means it is not possible to use traditional browser-based JavaScript heap spray libraries like “heaplib” to exploit JRE bugs (Sotirov). Since an attacker controls all applet code, it is still possible to conduct a heap spray via Java code (Dowd). It may also be possible to conduct heap spraying via “LiveConnect”, though this remains an area for future work.
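A minimal sketch of what such a Java-code heap spray looks like follows. The block size, block count, and the 0x90/0xcc filler bytes are illustrative choices for this sketch, not values taken from any particular exploit:

```java
import java.util.ArrayList;
import java.util.List;

public class HeapSpray {
    public static void main(String[] args) {
        // Hypothetical sizes; a real spray tunes these to the target heap layout.
        int blockSize = 1024 * 1024;   // 1 MiB per block
        int blockCount = 16;           // kept small for the sketch

        // Fill a block with a NOP-style sled (0x90) and a stand-in payload marker.
        byte[] block = new byte[blockSize];
        for (int i = 0; i < blockSize - 1; i++) block[i] = (byte) 0x90;
        block[blockSize - 1] = (byte) 0xcc;

        // Keep references alive so the garbage collector cannot reclaim the blocks.
        List<byte[]> spray = new ArrayList<>();
        for (int i = 0; i < blockCount; i++) spray.add(block.clone());

        System.out.println(spray.size());
    }
}
```

Because byte array contents are stored contiguously, each block lands in the Java Object Heap as an uninterrupted run of sled bytes, which is what makes a hardcoded landing address plausible.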
**Exploit Mitigation Support**

Developing exploits in modern times requires deep understanding of a multitude of exploit mitigation technologies. Enabling mitigations can mean the difference between a particular issue being exploitable or not. In order to enable them, special steps may need to be taken at compile time or changes made within the operating system configuration.

<table>
<thead>
<tr>
<th>Trusted</th>
<th>Untrusted</th>
</tr>
</thead>
<tbody>
<tr>
<td>Signed</td>
<td>Unsigned</td>
</tr>
<tr>
<td>Runs with full user privileges</td>
<td>Subject to Java “sandbox”</td>
</tr>
<tr>
<td>User is Prompted</td>
<td>No prompting</td>
</tr>
</tbody>
</table>

Figure 3: In-Browser Process Architecture

Unfortunately, JRE 6 takes advantage of very few exploit mitigation technologies. Since the Windows JRE 6 is compiled with a very old version (7.1) of the Visual C compiler, the state-of-the-art in default-enabled exploit mitigations does not apply. In fact, stack cookies (/GS) and Safe Structured Exception Handlers (/SafeSEH) are the only security-relevant mitigations available in this compiler. These two mitigations are very specific to stack-based buffer overflows. They are ineffective for certain types of memory corruption like Use-After-Free (UAF) and out-of-bounds array indexing. As seen in Figure 3 above, Data Execution Prevention (DEP) and Address Space Layout Randomization (ASLR) are not enabled for the processes created by the JRE. When used together, these two mitigations can make exploiting memory corruption vulnerabilities significantly more difficult. Using only one or the other only marginally increases the level of effort needed. Though Java does not opt-in to DEP, it is possible to forcibly enable DEP in Windows by using the "AlwaysOn" or "OptOut" setting. If this is done, a Return Oriented Programming (ROP) payload may be required to successfully exploit memory corruption vulnerabilities.
Due to the lack of ASLR, constructing such a ROP payload is relatively straightforward.

**msvcr71.dll**

In October of 2010, a ROP chain was created in order to craft the "java_docbase_bof" exploit in Metasploit. This ROP chain was based on "msvcr71.dll", which is installed with JRE 6. At that time, it was not understood that the exact same version of "msvcr71.dll" is shipped with all releases of JRE 6. This DLL, identified by version number "7.1.0.3052.4" and MD5 86f1895ae8c5e8b17d99ece768a70732, does not opt-in to ASLR or DEP. It is loaded within all components of Java, which includes the browser plug-in, "jp2launcher", and "java.exe". In the first half of 2011, the White Phosphorus (WP) exploit pack team developed a ROP chain based on this DLL which they named “Sayonara” (White Phosphorus Exploit Pack). The release of this chain was interesting primarily for two reasons. First, the authors described how this DLL is actually distributed with many applications, including all versions of JRE 6. Second, the chain was very short for what it accomplished. It does this by utilizing a “pusha” instruction to set up a multi-stage return sequence. Ultimately, the remaining gadgets end up executing the data immediately after it. A month later, Peter Van Eeckhoutte of the Corelan Security Team used his "mona.py" tool to automatically create another chain based on “msvcr71.dll” (Van Eeckhoutte, Universal DEP/ASLR bypass with msvcr71.dll and mona.py). Since that time, he has made several revisions reducing the chain’s size. The current version lives inside Corelan’s ROPdb and is 16 bytes shorter than the White Phosphorus version as of this writing (Van Eeckhoutte, Corelan ROPdb). These public ROP chains significantly simplify exploiting vulnerabilities on systems with DEP enabled and JRE 6 installed. This applies to bugs within Java as well as vulnerabilities in any other code loaded into the browser’s address space. Several publicly released exploits use this method to bypass DEP.
**Java 7**

With the release of Java 7, Oracle improved exploit mitigation support in the JRE. The entire code base has been compiled with Microsoft’s Visual C 10 compiler. Apart from enabling ASLR and DEP by default, this compiler also includes other improvements such as better stack cookie heuristics. All modules that are part of Java 7 opt-in to ASLR and DEP. This significantly raises the bar with respect to exploiting memory corruption vulnerabilities.

**Internals**

Familiarity with the internals of a target application is something that an exploit developer should constantly strive towards. Knowing data structures and how things fit together can make a huge difference when faced with a challenging crash. This section aims to serve as an introduction and overview to internals that have proven useful when developing exploits targeting the JRE.

Java Virtual Machine

At the core of the JRE lies the Java Virtual Machine (JVM). The JVM is what is ultimately responsible for executing Java byte-code. There are many JVM implementations, but the JVM used by Oracle Java is called “HotSpot”. The low-level functionality within HotSpot is written in C++ for performance reasons. Figure 4 depicts a flow chart that shows the steps involved in executing Java source as native code on the underlying hardware. First, the Java source code is compiled into byte-code. This is usually done at development time, with the resulting byte-code being distributed to end users. Next, the byte-code is either interpreted or Just-In-Time (JIT) compiled at runtime. When the JIT region is allocated, it is allocated with readable, writable, and executable permissions. Even if the byte-code is simply interpreted, it is ultimately native code that executes on the underlying hardware.

Heap Specifics

In order to keep track of all the data involved, the JVM uses two kinds of heaps: the Java Object Heap and the native heap. The two heaps are used for different reasons.
The Java Object Heap is used to track Java Objects. The native heap, on the other hand, is primarily used by underlying native code within native methods or the JVM itself. The native heap is implemented in msvcr71.dll via the malloc, realloc, and free functions, which eventually call the OS allocator APIs. On Windows these APIs are named RtlAllocateHeap, RtlReallocateHeap, and RtlFreeHeap. As a result, any memory allocated via the native heap will be subject to any security hardening implemented by the underlying OS. This includes hardening features such as DEP, ASLR, safe unlinking, and heap metadata validation. The Java Object Heap is a memory area that Java uses to track objects that it creates. These objects are garbage collected, so their lifetimes get magically handled for the developer. In some cases, native heap chunks’ lifetimes get bound to Java objects. On Windows, the Java Object Heap memory region is allocated via VirtualAlloc. As previously mentioned, the memory permissions prior to Update 18 were readable, writable, and executable. Allocation for this area typically receives a predictable memory address, between 0x22000000 and 0x26000000. This is related to the JRE’s “Class Data Sharing” feature (Oracle). Mark Dowd and Alexander Sotirov wrote about these weaknesses in their 2008 paper, “Bypassing Browser Memory Protections” (Dowd).

Common Challenges

During the course of developing exploits for memory corruption vulnerabilities within the JRE, several issues arose. The impact of these issues ranges from mildly annoying to downright challenging. The remainder of this section documents five such issues and proposes methods for dealing with them.

Debugging

Debuggers are extremely useful tools. When dealing with memory corruption bugs, they can be downright necessary. The Java Development Kit (JDK) includes the Java Debugger (JDB), which can be used for debugging Java code. Most exploit developers have a debugger preference.
IDA Pro’s debugger and Microsoft’s WinDbg were used for native code debugging. On occasion, it may be necessary to debug Java code and native code simultaneously. Doing so allows following code flow into and out of native method calls. While there are not currently any tools available that do both at the same time, using JDB in conjunction with a native code debugger like WinDbg suffices. Using a native code debugger on a JRE instance started from the browser can be particularly annoying. If too much time elapses while the process is suspended, a thread inside the Java browser plug-in will terminate the child process. This can happen while doing manual analysis or when attaching some debuggers that take too long to load. When this issue is encountered, resuming execution of the inferior results in a single step exception followed by the process exiting. Figure 5 shows this happening in a WinDbg session. Having this happen after single stepping through a long function can be very frustrating.

```
ntdll!DbgBreakPoint:
7709000c cc              int     3
0:043> g
First chance exceptions are reported before any exception handling.
This exception may be expected and handled.
eax=00001220 ebx=77094180 ecx=000002f0 edx=77083000 esi=770901f8 edi=770921e0
eip=770b016e esp=07b7fa90 ebp=07b7f0c0 iopl=0         nv up si ng nz na pe cy
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b             efl=00000207
ntdll!LdrpSnapThunk+0x1c1:
770b016e 03c2            add     eax,edx
0:046> g
eax=00000000 ebx=00000000 ecx=00000000 edx=00000000 esi=771821e0 edi=771820c0
eip=7709f2ca esp=0354fdaa ebp=0354fdaa iopl=0         nv up si pl nz na pe nc
cs=0023  ss=002b  ds=002b  es=002b  fs=0053  gs=002b
ntdll!NtTerminateProcess+0x12:
7709f2ca 83c404          add     esp,4
```

Figure 5: Process terminated in WinDbg

One simple way of dealing with this annoyance is by modifying the `Java_java_lang_ProcessImpl_destroy` native method. This method, shown below in Figure 6, is what the watchdog thread uses to terminate the child process.
Preventing the function from calling `TerminateProcess` keeps the child process from getting killed. Both dynamic and static analysis techniques can be used to accomplish this task. Using scripted breakpoints, as seen in Figure 6, is a simple, workable method. Changing the binary code directly within “java.dll” is more permanent, but may have negative effects on some Java applications. However, a majority of Java applications have little reason to terminate processes anyway.

### Spurious Access Violations

Another annoying issue that is encountered while using a native code debugger involves spurious access violations. It’s not entirely clear at this time why these exceptions are raised. Some speculate that these are expected exceptions raised from within JIT compiled code. Of course, they could also occur due to terrible code wrapped in a catch-all exception handler. Handling these access violations can be a bit tricky, since they are the same type of crash that indicates memory corruption has occurred. These types of crashes may look like exactly what researchers are looking for, but instead are just a tease. Passing these exceptions along to the JRE appears to have no negative side effects. Unfortunately, an exploit developer may find themselves repeatedly passing exceptions along before execution re-stabilizes. Once the process mellows out, triggering the issue being exploited again yields expected results. This is not ideal. An ideal solution would allow an exploit developer to ignore these spurious exceptions altogether. Finding such a solution is an opportunity for future work.

Encoding Conversion

Various tutorials covering Java Native Interface (JNI) development show code similar to that in Figure 7 (Native Method With a String Parameter). In them, they consistently call the `GetStringUTFChars` function when accepting string data from Java. This function treats the bytes within the source data as UTF-8 characters and converts them to ANSI C characters.
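To make the conversion hazard concrete, the following sketch uses Java's standard UTF-8 charset to show which characters survive as single bytes. (JNI's `GetStringUTFChars` actually produces modified UTF-8, which differs from standard UTF-8 only for NUL and supplementary characters, so the expansion behavior shown here carries over.)

```java
import java.nio.charset.StandardCharsets;

public class Utf8Expansion {
    public static void main(String[] args) {
        // Characters from 0x01 through 0x7f each encode to one identical byte ...
        String safe = "\u0001\u0041\u007f";
        System.out.println(safe.getBytes(StandardCharsets.UTF_8).length);

        // ... while anything higher expands to multiple bytes, mangling raw
        // shellcode bytes such as 0x90 (NOP) or 0xcc (INT3) if they are
        // smuggled through as characters.
        String unsafe = "\u0090\u00cc";
        System.out.println(unsafe.getBytes(StandardCharsets.UTF_8).length);
    }
}
```

The first string is three characters and three bytes; the second is two characters but four bytes, which is exactly the kind of silent expansion that corrupts a payload embedded in a string.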
Invalid characters are replaced with question mark (?) characters during translation. Placing shellcode or other binary data in such a string may result in data corruption and ultimately cause an exploit to fail. Several possibilities for dealing with this issue exist. One way is to ensure that all characters that are used are valid UTF-8 characters. This can be particularly challenging when an exploit writer needs to specify an address at a certain offset within a string. Another way is to avoid using characters that will be translated; characters in the range 0x01 through 0x7f are not translated. Although not fully investigated, it may also be possible to utilize alternate encodings or locales to avoid translation issues. This is another area for future work. When conducting heap-spraying, the default UTF-8 encoding can again cause problems. Using Unicode characters produces better results. Additionally, using byte arrays also works well since the array values are represented contiguously in memory. Interestingly, encoding can even be an issue within Java source code. Figure 8 shows a source excerpt created by Michael Schierl.

```java
public class OMGWTF {
    public static void main(String[] args) throws Exception {
        /* \u006a\u0075\u006e\u006b\u0079\u002a\u002f
           \u0053\u0079\u0073\u0074\u0065\u006d\u002e
           \u006f\u0075\u0074\u002e
           \u0070\u0072\u0069\u006e\u0074\u006c\u006e
           \u0028\u0022\u0048\u006f\u0077\u003f\u0022
           \u0029\u003b\u002f\u002a\u0020\u0062\u0079
           \u0040\u006d\u0069\u0068\u0069\u0034\u0032 */
    }
}
```

Figure 8: OMGWTF Source Listing

This application appears to do nothing, since the entire body of the `main` function is contained within comments. However, when the application is compiled and executed, it prints "How?", as seen in Figure 9.

```
fear:0~$ javac OMGWTF.java; java OMGWTF
How?
```

Figure 9: OMGWTF Compilation and Execution

This happens because Java pre-processes its source code.
Part of that processing converts the “\u” escapes into actual characters. Using the “native2ascii” application included with the JDK, the actual characters of the input source code are revealed. Figure 10 shows that the comment block is terminated inside the chunk of Unicode escapes, the code prints “How?”, and a new block comment is opened. This is quite a peculiar detail, of which many developers are probably not aware.

Figure 10: True OMGWTF Source Code

**Integer Signedness**

In the Java programming language, all integers are signed. This can be an issue if you need to represent a number larger than the signed maximum of a given type. Using hexadecimal notation in Java source will allow use of larger values. Truncation can occur when storing numbers in types that have a narrower range than the value. To avoid this problem, using the next larger integer type should be sufficient. That is, assigning 0xff to a byte will cause a compile error, but assigning it to a short is fine. Similarly, 0xffff does not fit in a short and must be stored in an int. Ultimately, these Java integer signedness issues are only a minor annoyance. Working around them is easy, but they are something that developers new to Java end up butting heads with.

**Code Reachability**

An issue that often arises during vulnerability research is code reachability. These types of issues are especially common when dealing with object oriented code due to the sheer number of possible levels of abstraction. Researchers routinely need several hours, or even days, to determine if potential vulnerabilities in complex code bases can be exercised. This is a common source of frustration, more so when the results show the vulnerable code is not reachable, or cannot be reached with input that is sufficiently attacker controlled. Java’s “sandbox” restrictions are one source of complication. For example, when auditing, a researcher might find a bug in a native method reachable via the “sun” namespace.
Code within this namespace cannot be accessed from unprivileged applications. CVE-2009-3869, which is further described later in this document, is one such issue. Deducing reachability for CVE-2009-3869 required significant manual work. Thankfully, certain Java language features may expose additional native code. Two such features are reflection and sub-classing. Reflection, which provides introspection, allows a programmer to quickly enumerate class methods, access data or methods in alternative ways, or even modify object data dynamically. Sub-classing allows a programmer to access or alter protected methods that are not accessible via an object instance.

Exploitation Techniques

It is human nature to break larger, complex tasks down into smaller, more achievable pieces. Developing exploits is one such task that benefits greatly from such decomposition. Focusing on smaller pieces allows crafting more generalized and versatile techniques. These techniques form the building blocks for exploit development. The rest of this section discusses how to apply such techniques to exploit various types of vulnerabilities. First, a set of contrived native code examples are used for illustrative purposes. Following that, exploits for real-world JRE native code vulnerabilities are examined.

Contrived Examples

A custom JNI library was created for demonstrative purposes. While this library provides only four methods, it exposes five distinct types of native code vulnerabilities. Once properly installed within the JRE, unprivileged Java Applets can create an instance of the Vuln class and call the public methods shown below in Figure 12.

Arbitrary Call

Experienced exploit developers will appreciate the elegance and simplicity of this type of vulnerability. When exploiting memory corruption bugs, this is the primitive that is primarily sought.
Generally, this is what you get whenever data that is control-flow sensitive, such as a function pointer or return address, gets corrupted. In the simplest case, exploiting this primitive only requires knowing the address of user-controlled data. More complicated cases may require creating multiple inter-dependent structures, dealing with input restrictions, or dealing with ASLR and DEP. The lack of ASLR and DEP support within JRE 6, combined with the default DEP setting of "OptIn" on desktop Windows systems, makes it possible to abuse the data segment of any loaded Java DLLs. On systems where the DEP policy has been set to OptOut or AlwaysOn, the publicly available "msvcr71.dll" ROP chains can be utilized. One elegant thing that may be possible is to disable the Java Security Manager by making a single call. This would allow the applet to continue execution with full user privileges. Developing such a technique is currently a future work item.

Arbitrary Write

Commonly referred to as a "write-what-where" or "write4" issue, the ability to write user-controlled data to user-controlled locations is an extremely powerful exploit primitive. When armed with such a primitive, an attacker can overwrite memory contents with surgical precision. The general method for taking advantage of this paradigm is to directly alter control-flow sensitive data. For an exploit to be reliable, the memory location of overwrite targets must be known. Since JRE 6 doesn’t opt-in to ASLR, this isn’t much of an issue. One obvious target is the control-flow sensitive data stored on the stack. Unfortunately, targeting this data does not usually result in a reliable exploit. Targeting data within the global data segments of various modules tends to be more reliable. Other popular overwrite targets are the function pointers within the Process Environment Block (PEB) of a Windows process.
Although some time has been invested in determining JRE-specific overwrite targets, nothing viable has been found at this time. Discovery of such memory locations is one potential area for future work.

**Exploit: ExecDllData**

The *ExecDllData* exploit included with this paper demonstrates how easy exploitation is without DEP and ASLR. It does this by first writing a payload to the data segment of the "msvcr71.dll" library, via the *Write4* method of the custom *Vuln* JNI class. Next, the exploit uses the *ArbCall* method to direct the flow of execution to the written data. In a default configuration, this exploit is completely reliable.

**Format Strings**

The subtle mistake of passing user-controlled data as a format string can become critical. Format string vulnerabilities are the favorite bug class of many exploit writers. No doubt this is because they provide a multitude of exploit primitives. The best possible exposure to such an issue allows full read and write access to arbitrary memory locations. Unfortunately, format string bug lovers will be sad to hear that the C runtime used by JRE 6 has the "%n" specifier disabled. This mitigation was introduced in the 2005 version of the Visual C runtime (Tom Gallagher). Disabling the "%n" specifier prevents using format string bugs to directly cause an arbitrary write primitive. However, format string bugs might still be exploited in a couple of ways. First, they could still be used to leak stack memory contents. This type of information leakage is becoming increasingly important due to the widespread adoption of ASLR. Second, they could be used to cause a buffer overflow that might not be reachable otherwise. If, for example, the input string length is checked to not exceed 1024 bytes, string expansion with a format string like "%1024xAAAABBBB" may still trigger a buffer overflow.
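Java's own formatter can illustrate the expansion effect. (The vulnerable code in question is C's `sprintf`, but the field-width semantics are analogous; the 1024-byte limit here is the hypothetical length check from the example above.)

```java
public class WidthExpansion {
    public static void main(String[] args) {
        // A short attacker-supplied format string ...
        String fmt = "%1024x";
        // ... expands far past the checked input length once evaluated,
        // with the trailing "AAAABBBB" landing beyond the 1024-byte mark.
        String out = String.format(fmt, 0) + "AAAABBBB";
        System.out.println(fmt.length());
        System.out.println(out.length());
    }
}
```

Six bytes of input become 1032 bytes of output, so a length check performed before the format call offers no protection.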
**Stack Buffer Overflow**

Sometimes ambiguously referred to as “Stack Overflow” vulnerabilities, stack-based buffer overflows are possibly the oldest documented type of memory corruption. Overflowing a buffer stored on an application’s stack can lead to overwriting control-flow sensitive data. This includes saved register values, one of which is a function’s return address. On the Windows platform, attackers can also abuse Structured Exception Handlers (SEH) stored on the stack. In the face of various exploit mitigations, it may be necessary to utilize saved local variable pointers or other data. Luckily, Java places many interesting things on the stack. The parameters and local variables within JNI methods include several C++ object pointers. Corrupting these values will result in a condition similar to an arbitrary call. However, as previously mentioned, encoding issues may complicate exploitation.

**Exploit: BofStack**

The *BofStack* exploit demonstrates executing data within the stack of the java.exe process. Since Java does not opt-in to DEP or ASLR, attackers can simply execute the stack without issue. This results in very reliable code execution. However, encoding issues complicate matters. Encoding conversion within the native method corrupts traditional payloads. To compensate, this exploit uses an encoded payload that only contains characters between 0x01 and 0x7f. Since a format string bug exists in the vulnerable JNI *sprintf* method, the percent symbol is also avoided. To achieve code execution, this exploit overwrites a Structured Exception Handler with a pointer to a "pop, pop, ret" instruction sequence. This sequence causes execution of the SEH *NextPtr*, which has been set to a small stub that begins setting up a UTF-8-compatible decoder and jumps over the SEH Handler. More decoder setup code, the decoder itself, and an encoded backwards jump are placed after the SEH record.
Once decoded and executed, the jump leads to the beginning of the payload, where another decoder setup stub, decoder, and the final encoded payload have been placed.

Exploit: BofStackSpray

Unlike the BofStack exploit, the BofStackSpray exploit demonstrates executing data within the Java Object Heap of the java.exe process. In the absence of DEP and ASLR, attackers can simply conduct a heap-spray and execute the Java Object Heap directly. By placing the payload in this region, encoding issues do not pose a challenge. After the heap-spray concludes, the SEH overwrite technique is used to redirect the flow of execution to a hardcoded address in the Java Object Heap. Subtle differences in the memory layout may cause this exploit to be less than 100% reliable.

Exploit: BofStackRopNoSEH

Building on the BofStackSpray exploit, the BofStackRopNoSEH exploit demonstrates that forcibly enabling DEP is not sufficient to prevent exploitation. Like BofStackSpray, this exploit uses a Java heap spray to place a payload in a predictable memory location. However, instead of leveraging SEH to redirect the flow of execution, this exploit overwrites the env object pointer JNI parameter. The resulting string is then prepared to be passed back to Java via a call to NewStringUTF. This call is made by dereferencing the smashed pointer, providing the necessary control over execution flow. In order to circumvent DEP, the Java heap spray contains four different sections. The payload begins with a string of pointers to the second section; this data is used as a fake virtual function table from which a function pointer is loaded. The second section contains a long string of pointers to return instructions, sometimes called “ROP NOPs”. Third, the Corelan Team “msvcr71.dll” ROP chain is appended. Finally, shellcode to execute “calc.exe” concludes the payload. However, this payload is not, by itself, enough to bypass DEP.
The final necessary piece of this exploit is a multi-stage stack pivot. A stack pivot usually turns control of execution flow into complete control over the data pointed to by the stack pointer. The first gadget, which lives in the Java heap spray, adjusts the stack pointer to point to the location of the second gadget within the stack buffer itself. Since the second gadget is in the stack buffer, care is taken to ensure the bytes that comprise it survive UTF-8 conversion. The second stage of the stack pivot uses a “pop esp, ret” instruction sequence. When executed, this points the stack pointer at the second section of the Java heap spray, which ends up processing the “ROP NOPs”. After processing these, the ROP chain stager is evaluated and execution ultimately flows to the shellcode. By combining a complex, multi-part Java heap spray with a multi-stage stack pivot, it was possible to avoid triggering the protection DEP provides. Using a Java heap spray may reduce the reliability of this exploit. However, this was determined to be an adequate trade-off in the face of dealing with encoding conversion issues. Theoretically, it should be possible to implement an alternative method. Doing so is left as an exercise for motivated readers.

Heap Buffer Overflow

Reliably exploiting heap buffer overflow vulnerabilities is one of the most complicated and unpredictable tasks an exploit developer can undertake. Exploiting these types of issues largely depends on what data was corrupted. The best case scenario is when control-flow sensitive application data, such as an object or function pointer, gets corrupted and immediately used. A less lucky scenario would require an exploit author to carefully manipulate program state so that the corrupted data gets used. This can be very time consuming.
In an effort to generalize and simplify heap overflow exploitation, researchers have chosen to target the underlying heap implementation of the operating system in hopes of producing advantageous exploit primitives. Unfortunately, operating systems have numerous, differing heap implementations. The details of each implementation vary, even between minor versions of the same operating system. For example, modern versions of Windows have multiple implementations and can switch between them at runtime based on application behavior. Corrupting data within the Java Object Heap is theoretically impossible. Causing memory corruption in this region is likely to be the direct result of a critical vulnerability within the Java memory management itself. That is, such a bug would be an application-specific memory corruption bug within JVM native code. Due to the required time investment, many researchers avoid even attempting to develop exploits for heap overflow bugs. They simply believe that searching for bugs that are more readily exploited will take less time and effort. For this reason, in-depth research regarding Java-specific methods for exploiting heap overflows was eliminated from the scope of this paper. While it’s a potential area for future work, it doesn’t appear to be an especially promising one.

Real-World Exploits

For further demonstrative purposes, two exploits from the Metasploit Framework were chosen. These two memory corruption issues were chosen since they illustrate several of the techniques discussed in this paper. Both exploited bugs are caused by stack-based buffer overflow vulnerabilities. The following describes these exploits in exquisite detail.

CVE-2009-3869

The first exploit was chosen since it serves as an excellent example of how finding alternative code paths can avoid the sandbox and enable reaching a native method which is not directly accessible.
This exploit takes advantage of a stack buffer overflow in the `Java_sun_awt_image_ImageRepresentation_setDiffICM` function. Causing a stack buffer overflow with this vulnerability depends on a relationship between two different `IndexColorModel` instances, hence the “ICM” in the vulnerable function name. By creating two slightly different color model arrays, data on the stack, including the SEH handler, is overwritten with the value 0x24012401. The vulnerable function is a private native method, called from the `ImageRepresentation.setPixels` method within the “sun.awt” namespace. Because it is in the “sun” namespace, the vulnerable code is not directly reachable from an unprivileged Applet. Searching the Internet led to some stack traces that indicated `ImageRepresentation.setPixels` is reachable via “java.awt.image”, which is usable from within Applets. The stack traces showed that this method is called from within the `ImageFilter` class. The `ImageFilter` class takes data from an `ImageProducer` object and passes it to an `ImageConsumer` object. Getting such a filter to be utilized entails passing a producer and a filter when constructing a `FilteredImageSource` object. The resulting object is then passed to the `createImage` function. When the created image is drawn, several of the filter’s methods will be executed. By overriding the `setDimensions` method, it becomes possible to call the `setPixels` method of the `ImageConsumer`. With the stack corrupted and the flow of execution redirected to 0x24012401, the only task left is executing the payload. To accomplish this, the payload and a string of NO-OP instructions are decoded from two applet parameters. The resulting values are then used to conduct a Java heap spray. After this, the address 0x24012401 contains a NOP sled followed by the payload. As this vulnerability was fixed prior to Update 18, the Java object heap was still readable, writable, and executable. No attempt to bypass DEP was necessary.
CVE-2010-3552

The second exploit that was chosen is a far less complicated issue. This vulnerability manifests when processing an embedded applet that contains both a “launchjnlp” and a “docbase” parameter. When triggered, the value of the “docbase” parameter is copied into a fixed-size buffer on the stack. Unlike most JRE vulnerabilities, this memory corruption occurred in the context of the Java plugin itself. That is, the corrupted stack belonged to a thread within the browser itself. Since the bug manifests in the browser itself, it is not possible to rely on the JRE’s lack of DEP. Many modern browsers call the `SetProcessDEPPolicy` Windows API to permanently enable DEP process-wide. For this reason, the exploit utilizes a ROP chain based on “msvcr71.dll” that predates the White Phosphorus and Corelan Team chains. This ROP stager executes the payload within memory that is readable, writable, and executable to avoid DEP. Because the vulnerable function does not contain any stack cookies, a traditional return-address overwrite was used. Exploiting stack buffer overflows in this manner does not require a stack pivot. As a result, this exploit is extremely reliable.

Conclusion

Version 6 of Oracle’s Java Runtime Environment proves to be a soft target in the face of the state of the art in exploitation technologies. Several challenges exist when exploiting memory corruption vulnerabilities within the JRE, but none are insurmountable. The lack of ASLR and NX compatibility puts it nearly a decade behind modern exploitation mitigations. Although version 7 of Oracle’s JRE has been released, widespread adoption has yet to occur. This version brings a greatly improved security posture. However, features like Class Data Sharing continue to provide possibilities for bypassing ASLR. Recently, the first update to JRE 7 was released. Of the twenty vulnerabilities disclosed, only three of them did not affect JRE 7.
This shows that apart from changing compilers, little has been done to proactively increase security (Oracle). For these reasons, combined with the vast size of Java’s install base, Java poses a significant risk to the Internet ecosystem. As such, it will remain a primary target for attackers, and updates will continue to include numerous security issues for the foreseeable future.

Recommendations

Dealing with the risk that comes along with having the JRE installed is relatively easy. There are several things that Internet users can do to protect the ecosystem. The best way to have complete protection from potential vulnerabilities in the JRE is to completely uninstall it. Unfortunately, this is not an option for many users since legitimate use cases still exist. Some such cases include VPN connectivity and web-based file transfer applets. Since the primary attack vector is via the web, the next best option to completely uninstalling is disabling the browser plug-in. If a JRE is required for accessing some web sites, only enable the plug-in for specific sites. In Chrome’s default configuration, users are prompted for confirmation before a Java Applet can be executed. This is an excellent compromise. Another decent strategy is using the 64-bit version. Using the 64-bit version of the JRE has several benefits. First, the x64 architecture contains a mandatory No Execute (NX) policy to prevent executing data. Second, changes made to default function calling conventions make creating ROP chains significantly more challenging. Finally, since x64 adoption is still in progress, an attacker may be less likely to develop their exploit to target the x64 version of Java. Use of the Microsoft Enhanced Mitigation Experience Toolkit (EMET) is often recommended for mitigating memory corruption vulnerabilities. However, due to a conflict between the process architecture of Java and a limitation of EMET, the mandatory ASLR feature is ineffective with JRE 6.
That is, even with EMET mandatory ASLR enabled, “msvcr71.dll” and the Java Object Heap will still receive predictable addresses. This is because “EMET’s mitigations only become active after the address space for the core process and the static dependencies has been set up” (Roths). Oracle is poised to do the most for Java security. Releasing an update to JRE 6 that opts in to exploit mitigations such as ASLR and DEP would improve the current situation. Oracle could also improve Java security by investing more in code audits, fuzzing, and static analysis. Using a proactive approach will eliminate bugs before they become widely deployed vulnerabilities.

Future Work

Due to the sheer size and complexity of the JRE, many areas of future work have been identified. Planned items include: Web Start attack surface, LiveConnect, alternate supported encodings, JIT spraying, and documenting Java crash reports. Additionally, plans exist to continue understanding how Java constructs manifest in native code. This includes research on: user-controllable data segment variables, global overwrite targets, attacks on Java-specific stack and heap contents, disabling the security manager, and class data sharing. Because research in these areas is ongoing, this is a living document; updated versions will be released as portions of that research conclude.

Bibliography

About The Author

Joshua J. Drake is a Senior Research Consultant with Accuvant LABS. Joshua focuses on original research in areas such as vulnerability discovery and analysis, exploitation technologies, and reverse engineering. He has over 10 years of experience in the information security field. Prior to joining Accuvant, he served as the lead exploit developer for the Metasploit team at Rapid7, where he analyzed and successfully exploited numerous publicly disclosed vulnerabilities in widely deployed software such as Exim, Samba, Microsoft Windows, Office, and Internet Explorer.
Prior to that, he spent four years at VeriSign’s iDefense Labs conducting research, analysis and coordinated disclosure of hundreds of unpublished vulnerabilities.

Accuvant LABS

Accuvant LABS is the world’s best and most respected attack and penetration team. Since 2002, Accuvant LABS has provided penetration testing, application and enterprise security assessments, vulnerability research and training to more than 2,000 clients across industry verticals. Experts from the team have won numerous awards and been featured in articles published by the Associated Press, CSO Magazine, Financial Times, SC Magazine, The New York Times and The Register, among others, and regularly speak at national information security conferences.
**Instructor (Julie Zelenski):** Hi there. Good afternoon. Welcome to Monday. Today’s a big day. A big day, right? We’ve got a couple of recursive backtracking examples that we didn’t get to on Friday that I’m gonna talk you through today, and then we’re gonna talk a little bit about pointers, the long anticipated introduction to one of the scarier parts of the C++ language as kind of a step toward building linked lists and the recursive data idea that we will study today and continue into Wednesday. And so the material at this point jumps around a little bit, right? We go back and pick up some of the pointers in array information that was in earlier chapters. Linked lists are covered a little bit later in kind of a different context that is – you can do but it’s not the best match to how we’re covering it here. And then handout 21, which I gave out today, is more similar to the way I’m going to be showing you linked lists and its concepts. Once we get the linked list up, we go back to the reader of chapter seven looking at algorithms and big O. We’ll spend actually several days on that on sorting, and analysis of algorithms, and things like that. [Inaudible] you guys should be working on that, right, coming in on Wednesday, right, some good practice getting your recursive decomposition skills down and figuring out how to work your way toward the base case and things like that. And then what goes out at [inaudible] will be actually kind of your first really big complete program, right, it is the venerable Boggle that you may have heard of because actually it’s such a legend in the 106 program that kind of brings together a lot of the stuff, ADTs, and recursion, and all sorts of things we study all term kind of build one big complete program now that we’ve kind of got a bunch of skills to put together on that, which will go out on Wednesday when assignment three comes in.
Note that tomorrow’s Super Tuesday, so if you are resident of a state who is one of the 24 or so who are participating in tomorrow’s big primary, be sure to get out and vote. Anything administratively you want to ask about? Questions? How many people have done, you know, at least one of the problems on the recursion problem set now? Oh, yeah, yeah. How many of you have done all of them? Not quite. Okay. Anybody who’s gotten along the way have any insights that they want to offer up to their – those who are a little further behind you? Any way of lending a hand to your fellow student?

**Student:** Draw a diagram.

**Instructor (Julie Zelenski):** Draw a diagram. What kind of diagrams have you drawn?

**Student:** Like, each step [inaudible] trying to figure out what it’s doing if I go all the way down and it’s a little difficult.

**Instructor (Julie Zelenski):** So he’s suggesting here, right, start with, you know, one of your bigger cases. Maybe that’s gonna take four or five, you know, calls before it hits that base case, and watch it do its work, right, think about, okay, what the first call makes, what the second call makes, what the third call makes, make sure you’re working toward that base case, and see how it both goes down into the calls and then unwinds its way back out, right, can definitely help a lot. For the ones that have a pretty high branching factor, that gets a little bit tricky, right, to sort of – [inaudible] has a five way branch with a five way branch under it, it would go a little crazy, so you’d have to pick some pretty small examples for the more complex problems. But certainly for the simple cases, right, being able to do that. Anything else? Yeah.

**Student:** [Inaudible].

**Instructor (Julie Zelenski):** Yes. So the [inaudible] we gave you, right, you really do need to match our prototypes, but they are very likely in many cases to not be enough, right, they’ll get you started but there’s gonna be more housekeeping.
You’re gonna be keeping them along the way, so probably a lot of them are just gonna be those one line wrappers that make a call into your real recursive function that then picks up the outer state plus some other kind of housekeeping to work its way down the recursive call. So yeah, it’s definitely true. A lot of little one-line wrappers in our prototype going into your recursive call. Over here.

**Student:** This doesn’t really have to do with recursion, but go back to [inaudible] I guess C++ or header file, you need to, like, physically move it to the right folder.

**Instructor (Julie Zelenski):** Yes. Yeah, sure.

**Student:** Adding it in Visual Studio doesn’t do it. It never compiles.

**Instructor (Julie Zelenski):** Yes, so you – when we give you a .cpp file, right, with some code included in the project, you really have to get it into the right place and get your project to include it, otherwise it’ll turn up saying I’ve never heard of this lexicon, you know, it will fail to compile or link one or the other, depending on which step it got hung up on. So if we give you some new code, make sure you incorporate it into your project, right, so that it actually is kind of built into it, and you can use that code in solving your problems. Anything else? Okay. Oh, wait.

**Student:** [Inaudible].

**Instructor (Julie Zelenski):** The what?

**Student:** Failure cases?

**Instructor (Julie Zelenski):** Yes, failure cases, right? Like if – often you get focused on what the truth will be, what the right answer – get to the success case, and then kind of completely ignore these other things about what about the dead ends, right, the things that are going nowhere. For example, on the phone T9 Text one, right, there definitely are some cases where you have to kind of stop things going down dead ends, and if you don’t, right, you can get into this sort of nasty exhaustive, you know, infinite recursion that can really make quite a mess of things.
So making sure you’re thinking both about how you know when you got to where you want to be, and where you get to something that you don’t want to be but that you can back out of. So the two samples I want to do are both based on the same backtracking pseudocode that we were using on Friday. I just want to go through them. I’m gonna do a little bit less attention to the code and a little bit more attention to the problem solving because at some point I think the problem solving is really where the tricky stuff comes on. And then the kind of – turning it into code, there’s some details there but they’re not actually as important, so I’m gonna de-emphasize that just a little bit here and think more about solving it. So this is the pseudocode for any form of a backtracker, right, that has some sort of, you know, failure and success cases, like when we run out of choices, we’ve hit a dead end, or we’ve hit a goal state, we’ve – you know, there’s no more decisions to make, right, is this want to be, yes or no, and then otherwise there are some decisions to make. And for all the possible decisions we could make, we’re gonna try one, be optimistic about it working out, make that recursive call that says that choice was where I wanted to be. If it returns true, or whatever the success return value is, then we return true, right? No need to look any further. That choice was good enough. If it didn’t work then we gotta unmake – try some other choices. 
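That generic pattern, written out as a sketch (pseudocode in the spirit of the slide being described, not the exact handout code):

```
bool Solve(config)
    if (no more decisions to make)
        return (config is a goal state)         // base cases: success or dead end
    for (each available choice c)
        make choice c                           // be optimistic
        if (Solve(config with c)) return true   // that choice was good enough
        unmake choice c                         // didn't work out, try another
    return false    // tried everything: triggers backtracking in the caller
```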
If we try all the things available to us and no case, right, did solve ever return true, then we can only conclude that the configuration as given to this call was unsolvable, which is where that return false causes it to back up to some earlier call and start unmaking that next layer of decisions, and then recur further down, eventually either unwinding the entire recursive [inaudible] all the way back to the beginning and saying it was totally unsolvable no matter what, or eventually finding some sequence of decisions that will lead to one that will get us to a success case. So the one I’m gonna show you here is the sudoku, which is those little puzzles that show up in newspapers everywhere. They’re actually not attributed to – apparently, to the Japanese, but it apparently got a lot more popular under its Japanese name than it did under the original English name for it. And the goal of a sudoku, if you haven’t ever done one, right, is it’s a nine by nine grid in which you’re trying to place numbers, and the requirement for the numbers is such that within any particular row, or any particular column, or any particular block, which is a three by three subsection of that, the numbers one through nine each appear once. So you can never have two ones in a row, two twos in a column, two threes in a block, or anything like that. And so there has to be some rearrangement, or, in fact, permutation of the numbers one through nine in each row, in each column, and each block, such that the whole puzzle kind of works out logically. So when it’s given to you, you know, usually some fraction of the slots are already filled in, and the goal for you is to fill in those remaining slots without violating any of these rules. Now the sort of pure sudoku solvers don’t really use guessing.
It’s considered, actually, poor form, you know, that you’re supposed to actually logically work it out by constraints about what has to be true here versus what has to be true there to kind of realize that – what choices you have. We’re actually not gonna be a pure, artistic, you know, sudoku solver. What we’re gonna do is we’re actually gonna use a brute force recursive algorithm that’s just gonna try recursive backtracking, which is to say, make an assignment, be optimistic about it, and if it works out, great, and if not, we’ll eventually come back to that decision and revisit it. So what we have here basically is a big decision problem. You know, of the 81 squares on here, you know, about 50 of them, right, need to be chosen. For each of those 50 squares, right, we’re gonna do them one at a time and recur on the remaining ones. So, you know, choose one then we’ll just go left to right from the top. Choose that one at the top, make an assignment that works, and so that’s what we’ll use the context we have in problem here. So, for example, if you look at this first row, there’s a one in this column so we can’t use one. There’s a two in that block so we can’t use two, but there’s not a three in either that row, or that column, or that block, so we’ll say, well, three looks good, you know, just trying the numbers in order. We’ll be optimistic, say that works, and say, well if we planted a three here, could we recursively solve the remaining 49 holes and work it out? And so we get to that next one – we look at this one and say, okay, well we could put a one here, right, because there’s not a one in this column, not a one in that row, not a one in that block, so we’ll kind of move on. I’m gonna do a little demo of that for you. And maybe it’s to kind of keep moving our way across, and only when we get to a dead end based on some of our earlier decisions will we unmake and come back. So let me – okay. So that’s the same set of numbers there. 
So I put the three up in the corner. And so it puts the one here thinking, okay, that looks good. So it gets to the next square over here and then the one can’t be used because it’s actually already in use both in that column and in the row we’re building so far, but two can be used. Two doesn’t conflict with anything we have so far. And so it just keeps going optimistically, right? At that stage over there it turns out almost all the numbers are in use. Most of the numbers all the way up through seven and nine are there, and seven’s in its column. So, in fact, the only number that that could possibly be is nine, so the only choice we have here to try is nine, and then we’ll place the seven next to it. And so now we have a whole row that doesn’t conflict with any of its blocks this way, and then we just keep moving on, right, so that is to keep kind of going from top to bottom, left to right. We’ll place that four. We’ll place another one. Place a three. So it’s actually choosing them – it’s actually going in order from one to nine just picking the first one that fits, right, so when we get to the next square here, right, it can’t use one, it can’t use two, it can’t use three, it can’t use four, it can’t use five because they’re all in that row. It can’t use six because it’s in the column. It can’t use seven because it’s in that row, but it should be able to use eight, and so it’ll place an eight there. So it just kind of examines them in sequence until it finds the first one that doesn’t violate any things already decided, and then kind of moves optimistically forward. So about this point, right, we’re doing pretty well, but we’re starting to run into some troubles if you look at the next one over here. It’ll place the six there, but then it will – once the six is placed there, then it looks to the right and it says, oh, I need to put a nine there. That’s the only number left. It tries all the numbers one, two, three, four, and it says that isn’t gonna work. 
And so it actually fails on the rightmost column, which causes it to back up to the one right before and it says, well, why don’t you try something else here? Well, it looks at seven, eight, and nine, none of those can work either, so it’s actually gonna back up even further and then say, well what if we try to put a nine here? That also doesn’t work. So now it’s gonna start seeing that it’s kind of unwinding as the constraints we have made have kind of got us down a dead end and it’s slowly working its way back out to where the original bad decision was made. So it tries again on moving nine in here, and moving across, right, but again, kind of, you know, working its way forward but then kind of backing its way up. And let me go ahead and just run it faster so you can kind of see it. But, you know, it’s working on that row for a while, but essentially to note that the top row stays relatively constant. It kind of believes that, well, that first three must have been good because, you know, we’re getting somewhere on that, and it keeps kind of going. You can see that the activity is kind of always at the kind of tail end of that decision making, which eventually, right, worked its way out. And so it turns out, like, those three ones that we did put in the first spots were fine. That is, choices, right, it did work out. We didn’t know that when we started, right, but it was optimistic, right, it put it down and then kept moving, and then eventually, right, worked out how the other things that had to get placed to make the whole puzzle be solvable. 
And so this thing can solve, actually, any solvable sudoku, right, and if it’s not animating, instantaneously, even though it is really doing it in a fairly crude way, right, it’s basically just trying everything it can, moving forward, and only when it kind of reaches a contradiction based on some of those choices that they will – that can’t possibly be because at this point I’m forced into a situation where there’s nothing that works in this square, so it must be that some earlier decision was wrong. And you notice that when it backs up, it backs up to the most immediate decision. So if you think of it in terms of recursive call, here’s your first decision, your second decision, your third decision, your fourth decision, your fifth decision. If you get down here and you’re, like, trying to make your eighth decision, and there’s nothing that works, right, the decision that you come back to will be your seventh one. The one right before it. You don’t go throw everything away and start over. You don’t go all the way back to the beginning and say, oh, well that didn’t work, let’s try again. It’s that you actually use the kind of context to say, well, the last decision I made was probably the one that needs a little fixing. Let me just back right up to that one. That’s the way the calls unwind, and it says we’ll pick up trying some other options there. Which ones have we not tried on that one? And then go forward again, and again, if you get down to that eighth decision and you’re still stuck, you come back to the seventh decision again, and only after kind of the seventh decision has gone back and forth with the eighth unsuccessfully through all its options would we eventually return to that sixth decision, and potentially back to the fifth, and fourth, and so on. 
The code for this guy, a little abstracted that should very much fit the pattern of what you think recursive backtracking looks like, and then the kind of sort of goofier parts that are about, well, what does it mean to test a sudoku for being a sign of having conflicts is actually then passed out into these helper functions that manage the more domain specific parts of the problem. So at the very beginning it’s like, find an unassigned location on the grid, and it returns the row and column by reference. It turns out in this case those are reference parameters. So [inaudible] searches from top to bottom to find the first slot that actually does not have a value currently in it. If it never finds one, so exhaustively searched the grid and didn’t find one, then it must be that we have a working sudoku because we never put a number in unless it worked for us, and so if we’ve managed to assign them all, we’re done. If this didn’t return true, that meant it found one, and it assigned them a row and column, and then what we’re gonna go through the process of is assigning that row and column. So we look at the numbers one through nine. If there are no conflicts for that number, so it doesn’t conflict with the row, column, or block, that number isn’t already in use in one of those places, then we go ahead and make the assignment, and then we see if we can solve it from here. So having updated the grid to show that new number’s in play, you know, if we move on, the next [inaudible] of sudoku will then do a search for find unassigned location. This time the one that we previously found, right, has been assigned, so it actually won’t get triggered on that one. It’ll look past it further down into the puzzle, and eventually either find the next one to make a call on, and kind of work its way through, or to – you have to solve the whole thing. 
If it didn’t work, so we made that assignment to the number nine, and we went to solve, and eventually this tried all its possibilities from here and nothing came up good, then this unassigned constant is used to unmake that decision, and come back around, and try assigning it a different number. If we try all of our options – so, for example, if we never find one that doesn’t already conflict, or if we try each one and it comes back false – right, this return false here is what’s triggering the backtracking up to the previous recursive call to reconsider some earlier decision – the most recent earlier decision for this one – and say that was really our mistake, right, we’ve got to unmake that one. So it should look like all the recursive backtracking all through looks the same, right? You know, if we’re at the end, here’s our base cases; for all our options, if we can make this choice, make it, try to solve it, be optimistic, if it works out, return. Otherwise, unmake it, allow the loop to iterate a few more times trying again. If all of those things fail to find a solution then that return false here will cause the backtracking out of this decision. That unassigned constant is just something I made up. I used negative one, in fact, just to denote that it has no contents. It’s specific to sudoku in this case. Now would be a great time to ask a question. You guys kind of have that sort of half-okay look on your face. It could be you’re totally bored. It could be you’re totally lost. **Student:** Where do row and column get assigned? **Instructor (Julie Zelenski):** So they – find unassigned location takes [inaudible] reference. So if you look at the full code for this – this is actually in the handout I gave out last time, and so there’s a pass by reference in that function, and it returns true if it assigned them something, and then they have the coordinates of that unassigned location. Question over here.
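The pattern described above can be written out as a compact, runnable sketch. This is not the handout’s exact code – the grid representation and helper names here are my own stand-ins – but it follows the same structure: find an unassigned cell, try each digit, recurse optimistically, unmake on failure.

```cpp
#include <cassert>
#include <vector>
using namespace std;

const int UNASSIGNED = -1;  // made-up sentinel meaning "cell has no contents"
typedef vector<vector<int>> Grid;

// Scan top-to-bottom for the first empty cell; row/col are returned by reference.
// Returns false if every cell is filled, i.e. the puzzle is solved.
bool FindUnassignedLocation(const Grid &grid, int &row, int &col) {
    for (row = 0; row < 9; row++)
        for (col = 0; col < 9; col++)
            if (grid[row][col] == UNASSIGNED) return true;
    return false;
}

// True if num is not already used in this row, column, or 3x3 block.
bool NoConflicts(const Grid &grid, int row, int col, int num) {
    for (int i = 0; i < 9; i++)
        if (grid[row][i] == num || grid[i][col] == num) return false;
    int br = row - row % 3, bc = col - col % 3;
    for (int r = br; r < br + 3; r++)
        for (int c = bc; c < bc + 3; c++)
            if (grid[r][c] == num) return false;
    return true;
}

bool SolveSudoku(Grid &grid) {
    int row, col;
    if (!FindUnassignedLocation(grid, row, col)) return true; // all assigned: done
    for (int num = 1; num <= 9; num++) {
        if (NoConflicts(grid, row, col, num)) {
            grid[row][col] = num;               // make the choice
            if (SolveSudoku(grid)) return true; // be optimistic
            grid[row][col] = UNASSIGNED;        // unmake it and try again
        }
    }
    return false; // nothing worked: triggers backtracking in the caller
}
```

The `return false` at the bottom is exactly the point where control unwinds to the most recent earlier decision.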
**Student:** How’d you know to write solve sudoku as a bool rather than as, like, a void function? **Instructor (Julie Zelenski):** Well, in this case – that’s a great question, right, in most cases in recursive backtracking I am trying to discover the success or failure of an operation, and so that’s a good way to tell because otherwise I need to know did it work. And so if I made it void then there had to be some other way I figured it out. Maybe, you know, that you have to check the grid when you’re done and see that it no longer has any unassigned locations, but the bool is actually just the easiest way to get that information out. So typically your recursive backtracking machine will probably return something. Often it’s a true or false. It could be, you know, in some other case, some known good value, and some other sentinel that says, you know, bad value. So, for example, in finding an anagram of the words it might be that it returned the word if it found one, or returned an empty string if it didn’t. So using some sort of state that says here’s how – if you made the call, you’ll know whether it worked because we really do need to know. Make the call and find out whether it worked. Well, how are we gonna find out? One of the best ways to get that information is from a return value. Way in the back? **Student:** What exactly does the return call trigger in the backtracking [inaudible]? **Instructor (Julie Zelenski):** So think about this as that, you know, it’s hard to think about because you have to kind of have both sides of the recursion in your head, but when – let’s say we’re on the fifth decision, right? We make the call to solve the sudoku that’s gonna look at the sixth decision, right? If the sixth decision fails, it says, I looked at all my choices and none of them worked, right? It’s that that causes the fifth decision to say, well I – here, go solve from the sixth decision down. Well, the sixth decision came back with a false.
It got to this case after trying all its options, then it’s the fifth decision that then says, okay, well then I’d better unmake the decision I made and try again. And so at any given stage you can think about the caller and the callee, it’s saying, I’m making a decision. You tell me if you can make it work from here. If the delegate that you passed the smaller form of the recursive problem comes back with a return false – I’ve tried everything I could possibly do, but there was no way; the situation you gave me was unworkable – that says, oh, okay, well you know what, it must have been my fault, right? I put the wrong number in my box. Let me try a different number and now you try again. And so they’re constantly kind of these – you have to keep active in your mind the idea that there’s this whole chain of these things being stacked up, each of them being optimistic and then delegating down. And if your delegate comes back with the – there was nothing I could do, then you have to revisit your earlier optimistic decision, unmake it, and try again. And so that – this is the return false to the outer call, the one that made the solve call, that it unwinds from. Way over here? **Student:** [Inaudible]? **Instructor (Julie Zelenski):** Pretty much any recursive piece of thing can be reformed into a backtracking form, right? You need to – a little bit like the [inaudible] we tried last time, we’ll take a void returning thing and make it return some true or false, and there’s kind of, like, make a decision and be optimistic, see if it worked out. But that kind of rearrangement of the code should work with pretty much all your standard recursion stuff. So any kind of puzzle that actually, like, all these kind of jumble puzzles, and crossword puzzles, and things that involve kind of choosing, right, can be solved with recursive backtracking.
It’s not necessarily the fastest way, right, to get to a solution, but it will exhaustively try all the options until a sequence of decisions, right, leads you to a goal if there is a goal to be found. And so if you can think of your problem as a decision problem – I need to make some decisions, and then from there make some more decisions and so on, and eventually, right, I will bottom out either at a goal or a dead end – then you can write a recursive problem-solving backtracker to solve it. Way in the back? **Student:** This doesn’t have anything to do with recursion, but how did you slow down the simulation? **Instructor (Julie Zelenski):** How did I slow it down? I’m just using pause. There’s a pause function in our Stanford graphics library, and you just give it a time. You say one second, half a second, and just – I use that a lot in our demos, for example, for the maze, so that you can actually see what’s going on, and that way it just animates a little more. The Stanford graphics library, extgraph.h. Let me show you one more kind of just in the same theme. The idea of taking sort of some, you know, puzzle that somebody might give you, trying to cast it in terms of a decision problem, right, where you make decisions and then you move on – can I solve something like these little cryptarithmetic puzzles? So here’s send plus more equals money. I’ve got eight digits in there – eight letters in there. You know, D E M N O R S Y across all of them, and the goal of this puzzle is to assign these eight letters to the digits zero through nine without replacement so that if D’s assigned to three, it’s assigned to three in all places. For example, O shows up in two places in the puzzle. Both of those are either a two, or both are a three. There’s not one that’s one and one that’s another. And each digit is used once. So, like, if I assigned two to O, then I will not assign two to D. So what we’ve got here is eight letters. We’ve got 10 digits.
We want to make an assignment. What we’ve got here is some choices. The choices are, for each letter, what digit are we gonna map it to? Another way to think about it is if you took these eight letters, and then you attached two dashes on the end, and then you considered that the letter’s position in the string – the index of it – was its digit. It’s really just like trying the permutations, right, rearranging those letters in any of those sequences. So right now, for example, maybe D is assigned zero, and one, and two, and so on, and these guys are eight, nine. Well, if you rearrange the letters into some other permutation then you’ve made a different assignment. So in effect, right, what you’re looking at is a problem that has a lot in common with permutations, right? I’m gonna make an assignment, take a letter off, decide what index it’s gonna be placed at, so it’s kind of like deciding where it would go in the permutation, and then that leaves us with a smaller form of the problem which has one fewer letter to be assigned, and then recursively explore the options from there to see if we can make something that makes the addition add up correctly – that D plus E equals Y means that if D was assigned five, and E was three, then Y better be eight for this to work out. So the first form I’m gonna show you is actually just the dumb, exhaustive recursive backtracking that works very much like the sudoku problem where it just – it finds the next unassigned letter, it assigns it one of the unassigned digits, right, and then just optimistically figures that’ll work out. After it makes an assignment for all the letters it says, take the puzzle, convert all the letters into their digit equivalents, and see if it adds together correctly. If so, we’re done. If not, then let’s backtrack. So let me – I mean, actually, I’ll show you the code first because it’s actually quite easy to take a look at.
Again, it has helper routines that kind of try to abstract the pieces of the puzzle that actually aren’t interesting away from the core of the recursive algorithm, so it looks a lot like sudoku in that it takes the letters to assign – it actually keeps the string of the letters that haven’t yet been assigned. If there are no more letters in that string – we take one off each time we assign it – then we check and see if the puzzle’s solved. So that kind of does the substitution, and does the math, and comes back saying yes or no. If it worked out, we’ll get a true. If it didn’t work out we get a false. If we still have letters to assign then it goes through the process of making a choice, and that choice is looking at the digits zero through nine to see if we can assign one. So it’s looking at that first letter and that digit, and then it’s making sure that we don’t already have an assignment for that letter, and that we don’t have an assignment for that digit – sort of making sure that the constraints of the problem are being met. If we’re able to make that assignment then we go ahead and make a recursive call, having one fewer letter to make a decision for, and if that worked out, return true; otherwise we do an unassignment and come back around that loop, and then eventually the same return false at the bottom, which says, well, given the previous assignments of the letters before we got to this call, there was nothing I could do from here to make this work out. So let me show you this guy working because you’re gonna see that it is actually a crazy way to try to solve this problem in terms of what you know about stuff. So I say CS plus YOU equals FUN, a well-known equation. So I did this little animation to try to get you to visualize what’s going on at any given stage. So it has the letters down across the bottom, S U N C O Y F. That’s the string of letters to assign.
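That naive structure can be sketched as follows. The `Puzzle` struct, helper names, and `map`-based assignment are my own illustration, not the lecture’s actual code, but the recursion mirrors what’s described: assign every letter a distinct digit, and only check the arithmetic once all letters are assigned.

```cpp
#include <cassert>
#include <map>
#include <string>
using namespace std;

// Hypothetical puzzle representation: two operand words and a sum word,
// e.g. {"send", "more", "money"}.
struct Puzzle { string op1, op2, sum; };

// Convert a word to its numeric value under the current letter->digit map.
long WordValue(const string &w, const map<char,int> &assign) {
    long val = 0;
    for (char ch : w) val = val * 10 + assign.at(ch);
    return val;
}

// Does the substitution and the math; true if the addition works out.
bool PuzzleSolved(const Puzzle &p, const map<char,int> &assign) {
    return WordValue(p.op1, assign) + WordValue(p.op2, assign)
           == WordValue(p.sum, assign);
}

bool DigitInUse(const map<char,int> &assign, int digit) {
    for (const auto &entry : assign)
        if (entry.second == digit) return true;
    return false;
}

// Naive exhaustive version: assign all letters first, then check.
bool SolvePuzzle(const Puzzle &p, string lettersToAssign, map<char,int> &assign) {
    if (lettersToAssign.empty()) return PuzzleSolved(p, assign);
    char ch = lettersToAssign[0];
    string rest = lettersToAssign.substr(1);
    for (int digit = 0; digit <= 9; digit++) {
        if (!DigitInUse(assign, digit)) {
            assign[ch] = digit;                           // make the assignment
            if (SolvePuzzle(p, rest, assign)) return true;
            assign.erase(ch);                             // unassign and retry
        }
    }
    return false; // nothing workable given the earlier assignments
}
```

Note this sketch, like the demo, doesn’t even rule out leading zeros; it really is the dumbest thing that could work.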
It’s gonna assign it the next available digit when it makes that recursive call, so the first recursive call is gonna be about assigning S, and then the next call – and so on. So it always gets up to seven deep on any particular thing. And so it says, okay, well first digit available is zero, go ahead and assign that to S. So it’s gonna make a call there. And then it says, okay, well look at U. Well we can’t use zero because zero’s already in use. Why don’t we assign U to be one? Okay. Sounds good. Keep going. And then it’ll say, okay, let’s get to N. Let’s make an assignment for N. It says I’ll look around. Okay, well zero’s in use, one’s in use, how about two? Okay. Good. And then it assigns this to three, this to four, this to five, and this to six. Okay. It gets to here and it says, hey, does that work, 30 plus 541 equals 612? No? Okay. Well, you know what the problem was? It was F, right? I assigned F to six. How stupid of me. It can’t be six. Let’s make it seven. And then it says, oh, oh no, oh I’m sorry. I’m sorry, 30 plus 541, that’s not 712. How silly. I need to assign F to be eight. And it’s gonna do this for a little while. A long while [inaudible]. And then it says, oh, okay. Well, you know what, given the letters you’d assigned to the first six things, when you got to F, I tried everything I could and there was nothing I could do to fix this. You know what the problem was? It’s not me. It’s not me. I’m not the problem. You’re the problem. So it returns false from this call at the bottom, having tried all its options, which causes Y to say, oh, yeah, yeah, I know. I know I said five. Did I say five? I didn’t mean to say five. I meant to say six. And so it moves up to the six option. Again, optimistically saying that’s good. Go for it. See what you can do. So it picks five. That won’t work, right? It picks seven.
It’s gonna go for a long time, right, because it turns out, right, this is one of those cases where that very first decision, that was one of your problems, right? If you assign S to be zero, there’s nothing you can assign U and N to be that are gonna work out. So what it’s gonna be going through is this process, though, of having, you know, committed to this early decision, and kind of moving on, it’s gonna try every other variation over here before it gives up on that. So let me set it to going. Even though CS plus YOU does equal FUN. I guarantee it. We’ll let it do some work for a while. So those bars, as they grow up, are desperation. You can think of that as, like, it’s running out of options. It’s running out of time, you know, and it’s like oh, oh, wait, earlier, back up, back up. And so, okay, you can kind of see how far it’s backed up by how tall some of the deeper recursive calls are, right, the earlier ones in the sequence. And so it doesn’t, you know, revisit any of these decisions because it’s really busy over here, but you can see that C is now up to seven. It’ll get up to eight. It’ll get up to nine. And that is when it will cause itself to say, you know, I tried everything that was possible from C on down, and there was no way to make this thing work out. It must be that the bad decision was made earlier. Let’s back up and look at that. And so it’ll come back to the decision N, bump it up by one, and then go again. It’s gonna do this a long time, right? Go through all the options for N and its neighbors before it comes back and revisits the U. It’s gonna have to get U all the way up, right, through having tried one, two, three, four. So adding zero to one, and two, and three, and four, and discovering it can never make a match over there before it will eventually decide that the S is really where we got ourselves in trouble. So this is kind of in its most naïve form, right? The recursive backtracking is doing basically permutations, right?
You can see this is actually just permuting the digits as they’re assigned to the letters, and there are, in this case, you know, seven factorial different ways that we can make this assignment, and there are a lot of them that are wrong, right? It’s not being at all clever about how to pick and choose among which to explore. So in its most naïve form, right, recursive backtracking can be very expensive because often you’re looking at things that have very high branching, and very long depth to them, which can add up to a lot of different things tried. Just using some simple – in this case, heuristics, sort of information you know about the domain, can help you to guide your choices, instead of just making the choices in order, trying the numbers zero through nine as though they’re equally likely, or kind of waiting to make all the assignments before you look at anything to see if it’s actually good. There are actually some ways we can kind of shape our decision making to look at the most likely options first, finding the problem instead of niggling around in this dead end. So I’m gonna let that guy work for a little bit while I show you another slide over here. So what the smarter solver does is that it doesn’t really consider all permutations equally plausible. We’re gonna use a little grade school addition knowledge. If I look at the rightmost column, the least significant digit, I’m gonna assign D first. I assign D to zero, I assign E to one, and then I look at Y – I don’t try Y as five, or seven, or anything like that. I say there’s exactly one thing Y has to be, right: if D is zero and E is one, then Y has to be one, and I can say, well that won’t work because I already used one for E. So it’ll say, well that’s impossible. Something must be wrong early. Rather than keep going on this kind of dumb dead end, it’s to realize right away that one of the decisions I’ve already made, D and E, has gotta be wrong. So it’ll back up to E.
It’ll try two, three, four, very quickly realizing that of all the things you could assign to E, once you have assigned D to zero, you’re in trouble. And so it will quickly unmake that decision about D and work its way down. So it’s using the kind of structure of the problem. So it takes a little bit more housekeeping about where I’m at, and what I’m doing, and what’s going on, but it is using some real smarts about what part of the tree to explore, rather than letting it kind of just go willy nilly across the whole thing. Let me get that out of the way. And I’ll run this one. I say CS plus YOU equals FUN. Okay. So it goes zero here, and tries the one, and it says no, that won’t work. How about the two? No, that won’t work, right, because there’s nothing you can assign the N that’ll make this work. And so it immediately is kind of failing on this, and even after it tries kind of all nine of these, it says none of these are looking good, then it comes back to this decision and says, no, no, actually, S as a zero, that wasn’t so hot. How about we try S as one? And then it kind of works its way further down. Hello? Okay. So let me try this again. Let me get it to just go. Okay. So it took 30 assignments – 30 different things it tried – before it was able to come up with 41 plus 582 equals 623, which does add correctly. It didn’t have to unmake once it decided that S was one. It turns out that was a workable solution so it didn’t have to go very far once it made that commitment, and then you can see it kind of working its way up. So 30 assignments, right, across this tree that has, you know, hundreds of thousands of possibilities, a very large number, but very quickly kind of pruning down those things that aren’t worth looking at, and so focusing its attention on those things that are more likely to work out using information about the problem. It doesn’t really change what the recursion does.
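The grade-school pruning idea is tiny when written down: given the two operand digits already chosen for a column (plus any carry coming in), the sum letter’s digit is computed, not searched. This little helper is my own illustration of that one idea, not the lecture’s smarter-solver code:

```cpp
#include <cassert>

// One column of grade-school addition: the sum digit is forced once the
// two operand digits and incoming carry are fixed.
struct ColumnResult { int forcedDigit; int carryOut; };

ColumnResult AddColumn(int top, int bottom, int carryIn) {
    int total = top + bottom + carryIn;
    return { total % 10, total / 10 };
}
```

So with D = 0 and E = 1 in the rightmost column of send + more = money, `AddColumn(0, 1, 0)` forces Y to be 1, which conflicts with E, and the solver can fail immediately instead of trying all the remaining assignments.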
It’s kind of interesting if you think that the same backtracking and recursive strategy’s the same in all these problems, but what you’re trying to do is pick your options – looking at the choices you have and trying to decide what are the more likely ones, which ones are not worth trying, right, so sort of directing your decision making from the standpoint of trying to make sure you don’t violate constraints that later will come back to bite you, and things like that. This one’s back here. It’s at 13,000 and working hard. Getting desperate. Doing its thing. Still has not unmade the decision about S is zero, so you can get an idea of how much longer it’s gonna take. We’ll just let it keep going. I’m in no hurry – and let it do its thing. So let me – before I move away from talking about recursion, just try to get you thinking a little bit about how the patterns we’re seeing, right, are more alike than they are different. That solving a sudoku, solving the eight queens, you know, finding an anagram in a sequence of letters, that they all have this general idea of there being these decisions that you’re making, and you’re working your way down to where there’s, you know – fewer and fewer of those decisions until eventually you end up, sort of, okay, I’m done. I’ve made all the decisions I can. What letter goes next, or whether something goes in or out. And the two problems that I call the kind of master or mother problems, right, of permutations and subsets are kind of fundamental to adapting them to these other domains. And so I’m gonna give you a couple examples of some things that come up that actually are just kind of permutations, or subsets, or some variation, that you can help me think about a little bit. One that the CS people are fond of is this idea of knapsack filling. It’s often cast as someone breaking into your house. All criminals at heart, apparently, in computer science.
And you’ve got this sack, and you can put 50 pounds of stuff in it, right, and you’re looking around, you know, at all the stuff that’s up for grabs in this house that you’ve broken into, and you want to try to pack your sack, right, so that you’ve got 50 pounds of the high value stuff. So let’s say there’s, like, 100 things, right, you could pick up, right, and they weigh different amounts, and they’re worth different amounts. What’s the strategy for going about finding the optimal combination of things to stick into your sack, right, so that you got the maximum value from your heist? What problem does that look like that you’ve seen? It’s a subset. It’s a subset. [Inaudible]. Right? So if you look at the, you know, the Wii, and you say, oh, should I take the Wii? You know, it weighs this much, and it’s this much value, well let’s try it in and see what happens, right, and see what else I can stuff in the sack with that, right, and then see how well that works out. I should also try it with it out. So while you’re standing there trying to decide what to steal, you’d have to type all the values of things into your computer program, go through the kind of machinations of, well, try this with that – because some things are big but have a lot of value, but they only leave a little bit of odd space left over that you might not be able to use well, or something. But what we’re looking for is the optimal combination, the optimal subset. So trying the different subsets tells you how much value and weight you can get in a combination, and then you’re looking for that best value you can get. You’re the traveling salesman. You’ve got 10 cities to visit. Boston, New York, Phoenix, Minneapolis, right? You want to cover them in such a way that you spend the minimal amount of time, you know, in painful air travel across the U.S.
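That in-or-out decision for each item is just the subset recursion with a weight constraint bolted on. A minimal sketch (the `Item` struct and function name are my own, for illustration):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>
using namespace std;

// Hypothetical loot item: a weight in pounds and a value in dollars.
struct Item { int weight, value; };

// Classic in/out subset recursion: for each item, try taking it (if it
// still fits) and try leaving it, and keep whichever total value is better.
int BestValue(const vector<Item> &items, size_t index, int capacity) {
    if (index == items.size() || capacity == 0) return 0; // no decisions left
    int without = BestValue(items, index + 1, capacity);  // leave it out
    int with = 0;
    if (items[index].weight <= capacity)                  // take it, if it fits
        with = items[index].value +
               BestValue(items, index + 1, capacity - items[index].weight);
    return max(with, without);
}
```

Every item doubles the number of subsets considered, which is exactly the exponential blowup the lecture is warning about.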
Well you certainly don’t want to be going Boston, New York, right, like, Boston, to L.A., to New York, to Seattle, to D.C., to Los Angeles, back and forth, right? There’s gotta be a pattern where you kind of visit the close cities and work your way to the far cities to kind of minimize your total distance overall. What problem was that really in disguise? We’ve got 10 cities. What are you trying to do? Help me out a little. I hear it almost. Louder. Permutations. You’ve got a permutation problem there, all right? You got 10 cities. They all have to show up, right? And it’s a permutation. Where do you start, where do you end, where do you go in the middle, right? What sequencing, right, do you need to take? So what you’re looking at is try the permutations, tell me which ones come up short. There are things, right, about heuristics that can help this, right? So the idea that certainly, like, the cities that are closer to you are likely to make a better choice than the farther ones. So kind of using some information about the problem can help you decide which ones are more promising avenues to explore. But in the end it’s a permutation problem. I’m trying to divide you guys into fair teams. I want to, you know, divide you up into 10 teams to have a kind of head-to-head programming competition. I happen to know all your IQs, or, I don’t know, some other, you know, useless fact that perhaps we could use as some judge of your worth. And I want to make sure that each team has kind of a fair amount of IQ power in it relative to the others, that I didn’t put, you know, all the superstars on one team. What am I doing? How do I divide you guys up? It’s a subset problem, right? So if – in this case it’s a subset that’s not just in or out. It’s like which team am I gonna put you in? So I’ve got 10 subsets to build. But in the end it’s an in-out problem that looks like, well, I have the next student. Which of the 10 teams can I try them in? I can try them in each of them, right?
So in some sense it’s trying him in the first team, the second team, the third team, right, and then pull the next student, try him in those teams, and then see whether I can come up with something that appears to get a balanced ratio when I’m done. It turns out you can take the letters from Richard Milhous Nixon and you can extract and rearrange them to spell the word criminal. I am making no conclusion about anything, having done that, just an observation of fact. But that sort of process, right – I’m trying to find, well, what is the longest word that you can extract from a sequence of letters? What kind of things is it using? Anything you’ve seen before? Oh, somebody, come on. I hear the whispering but no one wants to just stand up and be committed. How about a little of both, right? The what? **Student:** Like the sudoku puzzle? **Instructor (Julie Zelenski):** It’s a little bit like the sudoku, right? It’s got a permutation sort of backing, and it’s got a little bit of a subset backing, right? That is that I’m not choosing all the letters from this. And in fact there’s a subset process, which is deciding whether it’s in or out, and then there’s a permutation problem, which is actually rearranging them, right? That picking the C, and the R, and the I, and the M, and then kind of rearranging them. So in fact it has kind of a little bit of both, right? Which is what sequence of letters can I extract and then rearrange to make a word – it would end up using kind of elements of both of those kinds of recursions sort of mixed together, so that when you look at a lot of recursion problems, they end up actually just mapping down to one or the other, or maybe a little bit of both. And so feeling very comfortable with those problems, how you turn them into a recursive backtracker, and how you can recognize, right, their roots inside these other problems, is kind of a really good first step to becoming kind of a journeyman, in a way, of recursion.
So just whenever you see a new problem you start thinking, okay, is it permutations? Is it subsets? Or is it something totally new? It’s probably much more likely to be the first two than the third, and so trying to use the ones that you feel really comfortable with to kind of branch out to solve similar problems, right, where the domain is a little different – the structure of the code is still very much the same. All right. I’ve got 10 minutes to teach you about pointers. This should be no problem. Let’s go see what that guy back there’s doing. Oh, look at that, 38,000 assignments, still has not given up on that S is zero. It’s a trooper. Okay. So this is definitely gonna be your first introduction on kind of the first day of talking about this. This is not the only day. So don’t get worried about me trying to do it in 10 minutes. What I’m gonna do is give you a little bit of the basics today, and we’re gonna follow up with some more stuff tomorrow where we start kind of making use of them and doing something kind of interesting with them. So there is this notion in C++, which is inherited from the C language in fact, of using a variable type called a pointer as part of the things that you operate on. Okay. People tend to have a lot of trepidation. Just the word pointer causes a little bit of fear to kind of immediately rise in a new programmer. But hopefully we can demystify it a little bit, and then also get a little bit of understanding of why there are reasons to be a little bit wary of working with pointers. A pointer is actually an address. So here’s a couple things – we have to kind of think a little bit about how the machine works to have a vision of what’s going on here: that when you declare variables, you know, here I am in main, and I declare, you know, int Num, and string S, there is space set aside as part of the working memory of this program that is gonna record whatever I assign to Num or S. So if I say Num equals 45, what is that?
S equals hello – there has to be memory that’s being used to hold onto that. And as you declare variables, right, more of the space is set aside, and, you know, when you initialize it, when you read to it, when you write to it, right, that space is being accessed and updated as you go through. When you open a new scope, right, new variables come into scope. When you close that scope – really, that function – that space gets de-allocated. All this happens on your behalf without you actually taking a really active role in it, but in fact you do have to have a little bit of understanding of that to kind of get an idea of what a pointer’s about, is that there is this just big sequence of data, starting from an address zero. You can think of these things as actually having a little number on them. So maybe this is number 1,000, and this is number 1,004. In this case, assuming that we’re numbering by the byte, which is the smallest chunk of memory we can talk about, and that an integer requires four of those bytes to store all the different patterns for different kinds of numbers, so that the bytes from 1,000 to 1,003 are reserved for holding onto the information about 45. And then from 1,004 forward – to however big the string is, which we don’t even really know how big it is, so we’ll kind of leave it a little bit vague here – is gonna be set aside for storing information about that string. Okay. So usually it’s of interest to the compiler, right, to know where things are being stored and what’s going on. But it also turns out to be somewhat useful for us to be able to talk about something by address. Instead of saying the variable whose name is Num, I can talk about the variable who lives at location 1,000. And say there’s an integer, and I can point to it, or refer to it, by saying go look at memory address 1,000, and read or write the contents there that are of integer type. Okay. It seems a little bit odd at first. You’re like, well I already have other ways to get to Num.
Why is it I want to go out of my way to find another different mechanism that can get me access to that same variable? And we’re gonna see that actually there’s a lot of flexibility that is being bought by us adding this to our feature set. We can say, well, I could actually have one copy of something. So imagine that you have a system, like, an access system where you have student records, and they’re enrolled in a bunch of different courses – you’re enrolled in four or five different courses – and when it comes time for a class to know who’s in their list, it might be handy for them, instead of actually having a copy of that student’s information, to have a pointer to one student record, and have a lot of different places where there are additional pointers taken out to that same location, and it says, go look at the student who’s at address 1,000. I have the students at address 1,000, at 1,026, at, you know, 1,035, whatever those places are, and that that’s one way I can talk about what students I have, and those same student addresses might show up in someone else’s class list in math 51, or physics, you know, 43 or whatever, and that we are all referring to one copy of the student data without duplicating it. So this idea of being able to refer to stuff not just by name, but by where it lives, is gonna give us some flexibility in terms of how we build things out of that. So let me kind of show you a little bit of the basic operations, and then – and I’ll talk a little bit along the way about this thing about why they’re scary, because using, you know, memory access as your way to get to something is a little bit more error prone and a little bit harder to deal with than some of the other operations we have. So what I’m gonna show you here is a little piece of code. It shows some simple use of pointers. All right. So I’m gonna draw some of the variables that are going on here.
This is main, and it declares an integer whose name is Num, so I draw a box for that, and it declares two pointer variables. So the addition of the asterisk by the name there says that P and Q are pointers to integers. So P and Q themselves are variables that live in the stack. So all the local variables, we say, live in the stack. They are automatically allocated and de-allocated when you enter a routine. The space for them comes into being. When you leave it, it goes away, and P and Q are designed not to hold integers themselves. They don’t hold numbers, but they hold the address of a number somewhere else. And so the first thing I did with P there was to assign it the address of a new integer variable that came out of the heap. So the new operator is like the new operator in Java. It takes the thing you want to create one of, it makes one of those in the heap, and it returns to you its address. In that way it works exactly like in Java. So, in fact, Java actually has pointers, despite what anybody ever told you: the way you create objects and use new to access them and stuff like that is exactly the same in Java as it is in C++; it’s pointers behind the scenes. So I say P gets the value of a new integer. This memory over here is called the heap. So this is not to confuse you with the idea of the stack ADT. We’ve been using the [inaudible] but it does kind of help you to remember that the stack actually, by virtue of the way function calls get made (main calls A, which calls B, which calls C), kind of is laid out like a stack. The heap is just this unordered crazy pile of stuff. I ask for a new integer, so this might be address 1,000. This might be address 1,004. This might be address 1,008. Typically the stack variables are laid out right next to each other. This could be, like, address 32,016 or something over here. Some other larger address. So I’ve assigned P to be that thing.
So what actually got written into P’s box, behind the scenes, really is the number, you know, the address, 32,016. What I’m gonna draw is an arrow to remind myself that what it is is the location of an integer stored elsewhere. The de-reference operator, which is the first thing we see on this next line, says to follow P and assign it a 10. So this is taking that address that’s in P, using it as a location to go look up something, and it says, go write to that location at address 32,016 the number 10. And I said Q equals new int. So Q gets an [inaudible] maybe this is at 23,496. Some other place out there. And so that’s actually kind of what’s being stored here, 23,496. And then I did this thing where I assigned from de-referencing P to get an integer, and assigned that onto what Q points to. That has the effect of kind of following P, reading its 10, and then writing it over here. So copying the integers at the ends of those two pointers makes them point to the same value. So they point to two different locations, but those locations hold the same integer value. That is different than the next line, where I said Q equals P. Without the stars on it, it’s saying take the address that’s in P, and assign it to Q, causing Q and P now to be aliases for the same location. So now I have two different ways of getting to that same piece of memory, either by reaching through P and de-referencing it, or reaching through Q and de-referencing it. Both of them are reading and writing to that same location in the heap, where the 10 is, and then this one down here’s no longer accessible. I sort of lost track of it when I overwrote Q with the copy of P. When I am done, in C++, it is my job to delete things that are coming out of the heap. Yes, it should. I’ll take that one at 32,016. Whereas in Java, when you say new, and then you stop using something, it figures it out and it does what’s called garbage collection to kind of clean up behind you.
In C++, things that you new out of the heap are your responsibility to delete. So you delete something when you’re done with it. If I’m no longer using this new integer I created in the heap, I say to delete P to allow that memory to be reclaimed. And so that causes this piece of memory to get marked as freed, or reusable, so that a subsequent new call can have that space again and use it for things. I will note that right now, the code as written has a little bit of an error in it because it says delete P. And delete P says we’ll follow out to 32 [inaudible] and mark it as available. The next thing I did was said delete Q. Well Q points to the same place P did. So in fact I’m saying take that piece of freed memory and mark it freed again. There is no real guarantee about whether that’s gonna do something sensible, or whether it’s just gonna ignore me. One thing it could do is just say, well look, that memory’s already freed, so you saying to free it twice is kind of stupid. On the other hand, it could still cause some more problems depending on how the heap allocator works, and there’s no guarantee. So it becomes very [inaudible] on the programmer to be very careful about this matching, that if you make a new call, you make a delete call. And if you already made a delete call, you don’t make another one accidentally. So I really should take that line out of there. The piece of memory that Q originally pointed to, right, I no longer have any way to get to, and so I have no way of making a delete call to it and freeing it. And so this little piece of memory we call an orphan. He’s stranded out there in the heap, no way to get back to it, and C++ will not automatically reclaim it for us. So we have created a little bit of a mess for ourselves. If we did that a lot, right, we could end up clogging our heap filled with these orphans, and have to keep getting new memory because the old memory, right, was not being properly reclaimed.
We’re gonna talk more about this, so this is not the be-all end-all of pointers, but just a little bit to think about today. We’ll come back on Wednesday, talk more about it, and then talk about how we can use this to build linked lists, and that will be fun times for all. [End of audio] Duration: 52 minutes
Hypothetical Queries in an OLAP Environment

Andrey Balmin and Yannis Papakonstantinou (Dept. of Computer Science and Engineering, Univ. of California, San Diego, {abalmin, yannis}@cs.ucsd.edu) and Thanos Papadimitriou (Anderson School of Management, Univ. of California, Los Angeles, apapadim@anderson.ucla.edu)

Abstract

Analysts and decision-makers use what-if analysis to assess the effects of hypothetical scenarios. What-if analysis is currently supported by spreadsheets and ad-hoc OLAP tools. Unfortunately, the former lack seamless integration with the data and the latter lack flexibility and performance appropriate for OLAP applications. To tackle these problems we developed the Sesame system, which models a hypothetical scenario as a list of hypothetical modifications on the warehouse views and fact data. We provide formal scenario syntax and semantics, which extend view update semantics to accommodate the special requirements of OLAP. We focus on query algebra operators suitable for performing spreadsheet-style computations. Then we present Sesame's optimizer and its cornerstone substitution and rewriting mechanisms. Substitution enables lazy evaluation of the hypothetical updates. The substitution module delivers orders-of-magnitude optimizations in cooperation with the rewriter, which uses knowledge of arithmetic, relational, financial and other operators. Finally we discuss the challenges that the size of the scenario specifications and the arbitrary nature of the operators pose to the rewriter. We present a rewriter that employs the "minterms" and "packed forests" techniques to quickly produce plans. We experimentally evaluate the rewriter and the overall system.

1 Introduction

Recently the database community has developed data warehousing and OLAP systems where a business analyst can obtain online answers to complex decision support queries on very large databases.
A particularly common and very important decision support process is what-if analysis, which has applications in marketing, production planning, and other areas. Typically the analyst formulates a possible business scenario that derives a hypothetical "world", which he consequently explores by querying and navigation. What-if analysis is used to forecast future performance under a set of assumptions related to past data. It also enables the evaluation of past performance and the estimation of the opportunity cost taken by not following alternative policies in the past [PC95]. For example, an analyst of a brokerage company may want to investigate what would be the consequences on the return and volatility of the customers' portfolios if during the last three years the brokerage had recommended the buying of Intel stock over Motorola. According to his scenario he (hypothetically) eliminates many Motorola buy orders that the customers had actually issued, introduces Intel share orders of equivalent dollar value, and recomputes the derived data. Subsequently, he investigates the results of this hypothesis on specific customer categories. More hypothetical modifications and queries will follow as the analyst follows a particular trail of thought. Spreadsheets or existing OLAP tools are currently used to support such what-if analysis. Surprisingly, despite its importance, what-if analysis is not efficiently supported by either one. Spreadsheets offer a large number of powerful array manipulation functions and an interactive environment that is suitable for specifying changes and reviewing their effects online. However, they lack storage capacity, the functionality of DB query languages, and seamless integration with the data warehouse; once the data has been exported to the spreadsheet it becomes disconnected from updates that happen in the data warehouse.

(This work was supported by the NSF-IRI 9712239 grant, UCSD startup funds, the Onassis Foundation, and equipment donations from Intel Corp.)

(Permission to copy without fee all or part of this material is granted provided that the copies are not made or distributed for direct commercial advantage, the VLDB copyright notice and the title of the publication and its date appear, and notice is given that copying is by permission of the Very Large Data Base Endowment. To copy otherwise, or to republish, requires a fee and/or special permission from the Endowment. Proceedings of the 26th VLDB Conference, Cairo, Egypt, 2000.)

OLAP systems offering what-if analysis [CCS] lack the analytical capabilities of spreadsheets and their performance is orders of magnitude worse than what can be achieved by intelligent scenario evaluations, such as the ones delivered by our Sesame prototype. To further understand the limitations of current OLAP tools let us walk through a typical implementation of the what-if analysis example above. First an experienced user or the data warehouse's administrator designs a 'scenario' datacube and develops a script (e.g., see [CCS] for a scripting language) that populates the scenario datacube with the data corresponding to the hypothetical world developed by the scenario. Consequently the cross-tabs (sums) and other views are recomputed. Apparently the creation of the scenario datacube cannot be an online activity. After the scenario is materialized the analyst will issue queries, drill-down and roll-up [GMUW99] into parts of the hypothetical world. At this point it becomes evident that materializing the full hypothetical world (and hence delaying query submission by as much as a day) may have been an unnecessary overhead. Consider the following two cases where the conventional methodology underperforms. We comment on how Sesame handles such cases. First, queries and drill-downs on detailed data will typically retrieve only a small part of the hypothetical world.
(After all, there is only so much real estate in a monitor.) For example, a query that investigates the consequences of the scenario on the portfolios of the first 50 investors does not have to materialize anything more than the hypothetical portfolios of the specific investors. Indeed, Sesame won't even materialize the hypothetical portfolios; it will simply retrieve the actual portfolios, it will remove the Motorola orders and will dynamically introduce in the result Intel orders of equal dollar value. Second, queries that retrieve various aggregate measures, such as the SUM, can leverage the corresponding aggregate measures of the 'actual' datacube. For example, Sesame will compute the hypothetical current value \( V'(x) \) of the portfolio of customer \( x \) as follows:

\[ V'(x) = V(x) - \sum_d O[x,m,d]\,(T[m] - P[m,d]) + \sum_d \frac{P[m,d]}{P[i,d]}\, O[x,m,d]\,(T[i] - P[i,d]) \]

where \( i \) stands for Intel, \( m \) for Motorola, \( V'(x) \) is the hypothetical value of the portfolio of customer \( x \) and \( V(x) \) is the actual value. The array entry \( O[x,y,d] \) stands for the actual number of \( y \) shares bought (or sold, if the number is negative) by customer \( x \) on day \( d \), and \( P[y,d] \) stands for the (closing) price of shares of \( y \) on day \( d \). \( T[y] \) stands for the current value of \( y \). According to the above, the hypothetical value of a portfolio is computed by adding to the portfolio's actual value the profit of each hypothetical investment in Intel and subtracting the profit of each investment in Motorola. One may actually update the orders table and then propagate the updates, possibly using one of the efficient update propagation techniques suggested by the database community [BLT86, GMS93, RKR97, LYGM99, MQM97].
However, Sesame's no-actual-update policy has the advantage that no backtracking of updates is needed after scenario evaluation is over, nor is it necessary any longer to lock the hypothetically updated parts.

Technical Challenges and Contributions

First, we formally define scenarios as ordered sets of hypothetical modifications on the fact tables or the derived views of the warehouse. As usual, modifications on views may be satisfied by multiple possible fact table modifications. We extend prior work [AVY96] on the semantics of select-project-join (SPJ) view updates by introducing the notion of a 'minimally modified database', which is necessary for having reasonable semantics in warehouses involving non-SPJ operators, such as aggregation and arithmetic. Second, we developed an extensible system where arbitrary algebraic array operators can be used. Using the extensible algebra machinery we introduce operators that combine spreadsheet and database functionality. In this paper we present the join arithmetic family of operators. More operators (moving windows and operators for metadata handling) can be found in the extended version [BPP]. Expressions involving the novel operators are optimized by providing appropriate rewriting rules to the rewriting optimizer. Our most important contribution is Sesame's scenario evaluation, which is based on substitution and rewriting. Given a scenario \( s \), a query \( q \) on the hypothetical database, and information on the warehouse's views, the substitution module delivers a query \( q' \) that is evaluated on the actual warehouse and is equivalent to the result of evaluating \( q \) on the hypothetical database created by \( s \). Then the rewriter optimizes the query \( q' \). In the spirit of conventional optimizers it pushes selections down and it eliminates parts of \( q' \) that do not affect the result (such parts typically correspond to 'irrelevant' hypothetical modifications.)
It also rewrites the query \( q' \) in order to leverage the warehouse's precomputed views. We identify and provide solutions to two major rewriting challenges. First, the query expression \( q' \) is typically very large, as a result of the potentially large number of hypothetical modifications. The good news is that \( q' \) has a particular structure, which is exploited by Sesame's minterm optimization. Second, rewriting queries using views while non-conventional operators are involved in the algebra is a novel challenge that has not been considered by extensible rewriters [HFLP89] (they have not considered views) or by the 'rewriting using views' literature, which has focused on conjunctive queries [LRS95] or conjunctive queries with SQL's aggregation operators [SDJL96, CNS99]. We present the packed forests extension to System-R-style optimizers that allows the development of rewriters that trade the rewriter's running time against the generality of rewriting axioms, queries, and materialized views for which they can deliver the optimal result. Finally we incorporate Sesame as an add-on component to an SQL Server that stores the warehouse and provides the query processing engine for evaluating the optimized scenario/query.

1.1 Related Work

To the best of our knowledge what-if scenarios in an OLAP environment have not been addressed by the database research community. Our work brings together a multitude of concepts and techniques such as substitution, extensible rewriting optimizers, view updates and incomplete data, and logical access path schemas (see below). [GH97] presents an equational theory for relational queries involving hypothetical modifications and discusses its use in an optimizer that may choose between lazy and eager evaluation. The substitution step of our rewriter extends the lazy evaluation idea of [GH97] by considering an environment including views as well.
However, the optimization and rewriting problem is much more challenging in Sesame's case. The specification of the repercussion of a hypothetical modification on the constituents of a view is intertwined with works on the semantics of view updates ([AHV96] provides an overview.) The critical difference from the prior work is the introduction of the "minimally modified datagraph" concept and the corresponding redefinition of "sure" answers. The difference is just the intuitive requirement that base relation tuples that do not contribute to modified view tuples should remain sure and non-modified. Not surprisingly, our definition of sure and the conventional definition of [AHV96] coincide when we focus on SPJ queries, which have been the focus of prior work, but diverge when we consider aggregate, arithmetic, and moving window functions. The datagraph schema, which helps us rewrite queries using views, inherits from the LAP schemas [SRN90] the idea of guiding the rewriting optimizer by a graph indicating how the views are connected to each other. However, LAP schemas have dealt with SPJ queries only and this makes the rewriter described in [SRN90] much simpler than Sesame's. The next section introduces the framework, syntax and semantics used. Section 3 describes the architecture and algorithms involved in Sesame.

2 Framework

We first present the datagraph model, which is our abstraction of warehouses and datacubes and extends ... views in a warehouse system, and the edge labels correspond to the view definitions. Notice however that, in the same spirit with the lattice model [HRU96] and logical access paths [SRN90], multiple hyperedges may be leading to the same node view, hence encoding multiple ways in which the node view can be derived. The hyperedges assist substitution and rewriting (see Section 3). Each node \( v \) is populated with a bag of tuples \( S(v) \), called the state of \( v \).
Similarly to relational algebra, each Sesame algebra expression \( E(v_1, \ldots, v_m) \), whether it is a hyperedge label or a query, is a mapping that, given the input nodes' states \( S(v_1), \ldots, S(v_m) \), produces an output bag \( E(S(v_1), \ldots, S(v_m)) \). The states of the nodes must be such that they satisfy the hyperedge label expressions. Formally, a valid datagraph state (or simply datagraph from now on) is an assignment of a state \( S(v) \) to each node \( v \) of the datagraph schema such that for every hyperedge \( \{v_1, \ldots, v_m\} \xrightarrow{e} v \) it holds that \( S(v) = e(S(v_1), \ldots, S(v_m)) \). From now on we will omit mentioning \( S \) explicitly, whenever the context makes clear that we refer to states as opposed to schemas. The datagraph schema must be consistent, in the sense that alternative ways to compute a view have to yield the same result.

Definition 1 The set of transitive hyperedges \( T \) of a datagraph schema is computed as follows: 1. for every node \( v \), \( T \) contains the identity edge \( \{v\} \xrightarrow{v} v \); 2. if the datagraph schema contains the edge \( \{v_1, \ldots, v_m\} \xrightarrow{e} v \) and \( T \) contains the edges \( S_i \xrightarrow{e_i} v_i \), \( i = 1, \ldots, m \), then \( T \) also contains the edge \( \bigcup_{i=1,\ldots,m} S_i \xrightarrow{e'} v \), where \( e' \) is the expression created by substituting each \( v_i \) in \( e \) with \( e_i \).

Given a transitive hyperedge \( \{v_1, \ldots, v_m\} \xrightarrow{e} v \) we will say that each \( v_i \) is an ancestor of \( v \) and, vice versa, \( v \) is a descendant of each \( v_i \).

Example 2.1 Figure 1 illustrates a brokerage house's datagraph that will serve as the running example. A tuple \( (c, t, d, s) \) in the fact node OrderCTDS (Customer, Ticker, Date, Shares) indicates that customer \( c \) bought \( s \) shares of the stock with ticker symbol \( t \) on date \( d \). If \( s \) has a negative value it indicates selling of shares.
For brevity we are writing only the relation name corresponding to the node and, by convention, the capital letters at the relation name's suffix stand for the initials of the attribute names. The fact node PriceTDV (Ticker, Date, Value) has tuples \( (t, d, v) \) that stand for the closing price \( v \) of stock \( t \) on date \( d \). The current positions node PositionCTS is derived from OrderCTDS by the hyperedge \( \{\mathrm{OrderCTDS}\} \xrightarrow{\delta_{Date}} \mathrm{PositionCTS} \). The operator \( \delta_{Date} \) (which adapts the summation operator of [GMUW99] to one-measure tables) outputs all dimension attributes of the input except Date. For each output tuple \( (c, t, s) \) the measure \( s \) is the sum \( s_1 + \ldots + s_n \), where the \( s_i \)'s are the measures of the set of tuples \( \{(c, t, d_1, s_1), \ldots, (c, t, d_n, s_n)\} \) that consists of all input tuples where Customer = \( c \) and Ticker = \( t \). In general, \( \delta \) may have multiple parameters, e.g., \( \delta_{Date, Ticker} \). See [BPP] for a complete definition of \( \delta \) as well as of all the operators in the current implementation of Sesame. For brevity we are going to represent attributes by their first letter only and we may not include the full operand names in an edge expression whenever they are obvious from the context. The hyperedge \( \{\mathrm{OrderCTDS}\} \to \mathrm{PositionHistCTDS} \), labeled with the running-sum operator on Date (\( D \)), declares that the position history is the running sum of orders according to date. In particular, PositionHistCTDS contains the tuple \( (c, t, d, s) \) if \( \{(c, t, d_1, s_1), \ldots, (c, t, d_n, s_n)\} \) is the set of all OrderCTDS tuples for customer \( c \) and ticker \( t \) such that \( d_1 \leq d_2 \leq \ldots \leq d_n = d \) and \( s = s_1 + \ldots + s_n \). Of course, it is necessary that the attribute parameter(s) of the running sum are of an ordered type.
The hyperedge \( \{\mathrm{PositionHistCTDS}, \mathrm{PriceTDV}\} \to \mathrm{ValueHistCTDV} \) indicates that ValueHistCTDV, the history of the dollar value each customer held in each stock each day, may be derived by multiplying the stock prices with the position history. Finally, as an example of datagraph consistency, observe that ValueCTV, which is the current dollar value each customer holds in each stock, may be derived in two ways, corresponding to the hyperedges A and B of Figure 1, from OrderCTDS and PriceTDV. The first one first computes the current positions of the customers, \( \delta_{D}(\mathrm{OrderCTDS}) \), and then multiplies them with the current stock market prices, \( \sigma_{D=\mathrm{today}}(\mathrm{PriceTDV}) \) (depicted by the arrows of type A in Figure 1). The second one first computes the dollar value history for each customer, stock and date (see above) and then selects today's data, \( \sigma_{D=\mathrm{today}}(\mathrm{ValueHistCTDV}) \) (depicted by the arrows of type B in Figure 1). The datagraph is consistent because the two expressions always deliver the same result.

2.1 Novel Operators in Sesame

Sesame is based on an algebra where arbitrary operators can be included as long as their input and output are one-measure bags of tuples (see Section 2). Besides select, project, semijoin, union, difference and the aggregate operators sum, min, max, avg and count, we have also included the novel join arithmetic family of operators, presented below. Our operators appropriately merge the relational framework of Sesame with array algebras and spreadsheet-style operations. They lead to expressions that are much more concise than relational algebra expressions extended with generalized projections [GMUW99] that accomplish arithmetic operations.
The conciseness greatly facilitates the development of rewriting rules and speeds up the rewriter, which has to deal with smaller expressions.

Join Arithmetic Operators

The join arithmetic operators come in two families of four, the semijoin family and the outerjoin family, each providing addition, subtraction, multiplication and division. Each operator takes two operands, let us call them left(D1, ..., Dk, ..., Dn, Ml) and right(D1, ..., Dk, Mr). The dimension attributes of right must be a subset of the dimension attributes of left. The result relation has schema Result(D1, ..., Dk, ..., Dn, Measure).

The Semijoin Family: For every pair of tuples left(d1, ..., dk, ..., dn, ml) and right(d1, ..., dk, mr) that agree on the common dimension attributes, the result has a tuple Result(d1, ..., dk, ..., dn, ml * mr), where * is one of the four operations +, -, x, /. Tuples of either operand with no matching tuple on the other side do not appear in the result.

The Outerjoin Family: The outerjoin family is defined only when the two operands have identical lists of dimension attributes. For every pair of matching tuples left(d1, ..., dn, ml) and right(d1, ..., dn, mr) the result contains the tuple Result(d1, ..., dn, ml * mr); every tuple of left with no matching right tuple appears as-is in the result, and so do tuples of right with no matching left tuples.

Notice that, though the result relation name is by default "Result" and the result measure is "Measure", we may rename them to whatever we like by using the renaming operator ρ. If the operator is used in the datagraph schema then we omit the ρ, using the convention that the relation name and measure name that have already been given to the view override "Result" and "Measure". Based on the above and the special relation {(a)}, which has no dimensions and whose single tuple has measure a, we define four "macro" operators. (Division by 0 raises an exception.)
Our "implicit join" approach simplifies the expression of array computations and simplifies the axioms and rewriting rules which involve arithmetic (see Appendix).

2.2 Scenarios

A scenario is an ordered set of hypothetical modifications on a datagraph D. The first modification results in a hypothetical datagraph D1. The second modification uses the state of datagraph D1 and produces a new hypothetical datagraph D2, and so on. Eventually a query is evaluated on the last hypothetical datagraph. The following example illustrates the syntax and semantics of scenarios.

\( \mathrm{OrderCTDS}^1 \leftarrow \hat{\sigma}^{MULT_{1.2}}_{D > \text{'Jan 15, 97'} \wedge T = \text{'Intel'}}(\mathrm{OrderCTDS}^0) \)
\( \mathrm{OrderCTDS}^2 \leftarrow \ldots \)
\( \mathrm{OrderCTDS}^3 \leftarrow \ldots \)

The three modifications above roughly correspond to an update, a delete, and an insert. The first one states that a hypothetical datagraph D1 is created and its OrderCTDS node must be the result of applying the operator \( MULT_{1.2} \) to the fragment \( \sigma_{D > \text{'Jan 15, 97'} \wedge T = \text{'Intel'}} \) of the OrderCTDS node. Notice the select-modify operator \( \hat{\sigma} \) that is used for accomplishing the first modification. The function of \( \hat{\sigma}^f_c \) is to (i) select the tuples satisfying the subscript condition \( c \) and apply to them the superscript operator \( f \), and (ii) union the result with the remaining tuples of the input node. Hence, \( \hat{\sigma}^f_c(R) = f(\sigma_c R) \cup \sigma_{\neg c} R \). The hypothetical modification will be reverberated to all the nodes of the graph D1. For example, the PositionCTS node will reflect a 20% larger position in Intel. Intuitively, D3 is produced by having the OrderCTDS node ... We now formalize the semantics of a scenario s on a datagraph G. For uniformity we will be referring to the actual datagraph G as \( G^0 \). The notation \( e(V^0, \ldots, V^m) \) denotes an expression \( e \) whose arguments are nodes of \( G^0, G^1, \ldots, G^m \). Definition 2 assumes that the first \( i - 1 \) datagraphs are known and uses the i-th modification of s to derive the i-th hypothetical datagraph.
Definition 3 specifies the induction that defines G^k from G^0. Note in the following definition that the hypothetical datagraph is not an arbitrary datagraph that satisfies the modification and the edge expressions; in addition, it has to be in agreement with all minimally changed datagraphs. The intuition behind this definition is illustrated in Example 2.2.

Definition 2. Consider the datagraphs G^0, G^1, …, G^{i−1} and a modification v_d ← e(V^0, …, V^{i−1}). The hypothetical datagraph G^i meets the following properties:

1. For every node v of G^0 there is a node v^i of G^i with identical structure, namely a superscript i on the relation name. For every edge v → v′ of G^0 there is a corresponding edge v^i → v′^i of G^i.
2. S(v_d^i) = e(S(V^0), …, S(V^{i−1})).
3. Each node v^i of G^i contains the intersection of the corresponding nodes of all minimally modified datagraphs M^i. A modified datagraph M^i is called minimal if one cannot cancel any tuple's insertion or deletion in it and still have a valid modified datagraph that meets conditions 1 and 2.

Definition 3. A hypothetical datagraph G^k given the scenario s is a datagraph such that there is a sequence of datagraphs G^1, …, G^k where each G^i is a hypothetical datagraph of G^0, …, G^{i−1} given the i-th modification, for i = 1, …, k. We denote by G(G^0, s) the set of all hypothetical datagraphs given a scenario s and a datagraph G^0.

Note the following two points, which are illustrated in Example 2.2. First, there is no guarantee on the number of hypothetical datagraphs. Second, not all modified datagraphs are hypothetical according to our definition.
Example 2.2 Consider the hypothetical modification

\[
\text{PositionCTS}^1 \leftarrow \gamma_{T = \text{"Intel"},\ \text{MULT}_{1.2}}\ \text{PositionCTS}^0
\]

that hypothetically increases by 20% the customer holdings on Intel. There is more than one hypothetical datagraph, because there are multiple ways to derive an OrderCTDS^1 state such that the sum of the Intel tuples of OrderCTDS^1 is increased by 20%. There are modified datagraphs that satisfy the modification but affect irrelevant data. For example, there are datagraphs that lead to the same PositionCTS^1 but update non-Intel tuples as well. We believe that such datagraphs should not be considered valid hypothetical datagraphs. We exclude them from the set of hypothetical datagraphs by placing the third condition in Definition 2.

Finally note that we do not restrict valid hypothetical datagraphs to those that are minimal after modification. For example, a valid hypothetical datagraph for the running example is one that increments every Intel order by 20%. However, such a datagraph is not minimal. The only minimal datagraphs are those that assign the full increase of the Intel position to a single order. We believe that being restricted to minimal datagraphs would unnecessarily disqualify meaningful hypothetical datagraphs.

If a modification is applied on a node with no incoming edge, say the OrderCTDS of Figure 1, and the edge expression operators are total, then there is exactly one hypothetical datagraph. The result of a query or, more generally, the result of an expression (say, the expression that is used on the right side of an assignment) is comprised of a sure and a non-sure part, as defined below.
Definition 4 (Sure Expressions) Given a datagraph schema G and a scenario s consisting of modifications, the expression e(V^m) is sure if for every state of G the result of evaluating e(V^m) on every hypothetical datagraph in the set G(G, s) is identical.3

It is interesting to note the difference between our definition of "sure" and the one used in [AHV96] for the definition of updating a select-project-join view. The latter does not use "minimality of changes", and this makes it inappropriate in an OLAP environment with arithmetic and aggregate operators. For example, according to the definition of [AHV96], updating a fragment of a sum aggregate node makes the whole source node unsure.

3 Sesame's Algorithms, Implementation and Performance Results

The Sesame system is the middle layer in the 3-tier OLAP architecture of Figure 2. The warehouse is actually stored in a relational database (currently Microsoft's SQL Server). On the client side there is a user interface that creates the scenarios and hypothetical queries that are sent to Sesame. A simple GUI is available at

3 Note that according to the above definition (and according to Sesame, which follows it) the "sureness" of an expression depends only on the datagraph schema and not on the specific datagraph state. This decision is justified by obvious implementation considerations.

Example 3.1 Consider the query

\[
\sigma_{C = \text{"John"}}\ \text{ValueCTV}^1
\]

The substitution module will combine the scenario and the query into the following dereferenced query. The specific steps are explained in Section 3.1 and Example 3.3.
\[
\sigma_{C = \text{"John"}}\big((\gamma_{T = \text{"MSFT"},\ \text{Mult}_{1.1}}\ \text{PositionCTS}^0) \bowtie \text{PriceTodayTV}^0\big)
\]

Next, the rewriter makes the following transformations:

\[
\sigma_{C = \text{"John"}}\ \gamma_{T = \text{"MSFT"},\ \text{Mult}_{1.1}}\ \text{ValueCTV}^0
\]

At this point the query processor has achieved two goals: (I) it has expressed the query in terms of actual, stored relations; (II) it has optimized the expression by pushing selections down the query tree and by using the appropriate materialized views. In particular, it has used ValueCTV^0, as opposed to PositionCTS^0.

Finally, Sesame's execution engine treats the expression produced by the rewriter as an execution plan. The engine traverses the plan tree bottom-up. When it locates a subtree t that corresponds to a single SQL statement c, it sends c to the SQL server. Consequently the server creates and stores the result table r of c, and the engine replaces the subtree t with the table r. However, many Sesame operators cannot be reduced to SQL (e.g., moving windows and financials). For each operator of this kind Sesame has a stored procedure written in Microsoft's Transact-SQL, which has the full power of a programming language. Each procedure implements the functionality of a specific Sesame operator. Note that all processing is done at the SQL Server and no data is moved between Sesame's execution engine and the SQL Server. Only the final result passes through the engine, before it is sent to the client.

**Example 3.2** The engine will translate the plan produced by the rewriter in Example 3.1 into the SQL query

```
SELECT C, T, (V * 1.1) AS V
FROM ValueCTV
WHERE T = "MSFT" AND C = "John"
UNION
SELECT *
FROM ValueCTV
WHERE T != "MSFT" AND C = "John"
```

For the sake of the example, let us assume that SQL does not have a multiplication operator.
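A γ-as-UNION query of this shape can be tried end-to-end on any ordinary SQL engine. The sketch below uses Python's sqlite3 with a made-up three-row ValueCTV table (the table contents are assumptions for the illustration; single-quoted string literals are used, as sqlite3 expects).

```python
import sqlite3

# Toy ValueCTV table for checking the gamma-as-UNION rewriting.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE ValueCTV (C TEXT, T TEXT, V REAL)")
con.executemany("INSERT INTO ValueCTV VALUES (?, ?, ?)",
                [("John", "MSFT", 100.0),
                 ("John", "INTC", 50.0),
                 ("Mary", "MSFT", 30.0)])

# f(sigma_c R) UNION sigma_{not c} R: modified MSFT branch plus untouched rest.
rows = con.execute("""
    SELECT C, T, (V * 1.1) AS V FROM ValueCTV
    WHERE T = 'MSFT' AND C = 'John'
    UNION
    SELECT * FROM ValueCTV
    WHERE T != 'MSFT' AND C = 'John'
""").fetchall()
```

Only John's MSFT row has its measure scaled; his other rows pass through unchanged, and Mary's rows are filtered out by the query's selection on C.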
Then the engine will execute the plan by issuing the following three commands to the SQL Server:

1. SELECT * INTO #Tmp1 FROM ValueCTV WHERE T = "MSFT" AND C = "John"
   (creates #Tmp1 = σ_{T="MSFT" ∧ C="John"} ValueCTV)
2. Run a Transact-SQL procedure that creates #Tmp2 from #Tmp1, with V multiplied by 1.1.
3. SELECT * FROM ValueCTV WHERE T != "MSFT" AND C = "John" UNION SELECT * FROM #Tmp2

In many real-world situations, substitution and rewriting are not as simple or as fast as the few steps of Example 3.1 suggest.4 In the general case they both reduce to combinatorial problems. We have sped up substitution by focusing our algorithms on the class of structurally sure scenario queries. For this class substitution is polynomial in the size of the query and the datagraph. Then we present a series of rewriters that address the performance challenges that are special to what-if scenarios. Section 3.1 describes the substitution step. Section 3.2 gives an overview of a straightforward rewriting algorithm and its performance problems in non-trivial scenarios. Section 3.3 introduces the minterms technique for efficiently rewriting scenarios with multiple select-modifications. Section 3.4 describes the packed forest rewriter. Section 3.5 provides experimental results.

4 We have not yet separated the notions of logical and physical plan [GMU99], mainly because the physical work is passed to the SQL server. Note also that, in order to improve performance, the intermediate tables are stored in a special temporary database which is kept in main memory.

3.1 Substitution

The substitution module receives (i) a datagraph D^0, (ii) a scenario s, illustrated in (SQ5), that produces a hypothetical datagraph D^n, and (iii) a query q = e_q(V^n) on D^n.
The module derives a query q^0 that (1) uses exclusively the nodes of the original datagraph D^0, and (2) when evaluated on D^0 returns the same answer that q returns when it is evaluated on the datagraph D^n. We will call q^0 the dereferenced query.

\[
\begin{align*}
v_1 &\leftarrow e_1(V^0) \\
&\;\vdots \\
v_n &\leftarrow e_n(V^0, \ldots, V^{n-1})
\end{align*}
\tag{SQ5}
\]

The implemented substitution module works for the class of structurally sure scenario-queries, which are guaranteed to be sure (as defined in Section 2). Structural sureness leads to a very efficient substitution algorithm, because it depends on the graph structure of the datagraph schema and the scenario modifications, but not on the datagraph's edge expressions and the related axioms. Given a datagraph D^0 and the scenario-query (SQ5), the following nodes of D^0, …, D^n are structurally sure. For each structurally sure node v we also provide a set of expressions C(v) that compute v using D^0 nodes exclusively.

Initial Nodes. Every node V^0 is structurally sure. For each V^0 it is

\[
C(V^0) = \{ V^0 \}
\]

Directly Modified Nodes. If the nodes used in the i-th modification are structurally sure, then the directly modified node v_i is also structurally sure. The set of expressions that compute v_i is constructed by applying the modification expression e_i on each expression e′ that computes the corresponding argument node, i.e.,

\[
C(v_i) = \{\, e_i(e') \mid e' \in C(V') \text{ for each argument node } V' \,\}
\]

Unmodified Nodes. If the node v^i is not a descendant of a node that was directly modified in the i-th step of the scenario, then v^i is also structurally sure. One can easily see that such nodes are left unmodified by the i-th modification. Hence

\[
C(v^i) = C(v^{i-1})
\]

Indirectly Modified Nodes. If there is a hyperedge A_1, …, A_m → V and all of the source nodes A_j are structurally sure, then V is also structurally sure. In general, there are many ways in which we can compute V.
For example, given the hyperedge labeled by e and given expressions e_j that compute each of the A_j, one expression that computes V is derived by substituting each instance of A_j in e with the corresponding e_j. However, there may be many hyperedges leading to V, and each source node A_j of a hyperedge may be computed by multiple expressions (i.e., C(A_j) will typically have more than one expression). Hence C(V) is the following set:

\[
C(V) = \{\, e / (A_1 \mapsto e_1, \ldots, A_m \mapsto e_m) \mid e_j \in C(A_j),\ j = 1, \ldots, m \,\}
\]

where the notation e/(a_1 ↦ e_1, …, a_m ↦ e_m) stands for the substitution of each a_j, j = 1, …, m, in e with e_j.

Finally, a scenario-query is structurally sure if every node in the node set V^n, which is used by the query, is structurally sure. It is easy to see that the query can be computed by any expression of the set

\[
C_q = \{\, e_q / (V_1^n \mapsto e_1, \ldots, V_k^n \mapsto e_k) \mid e_j \in C(V_j^n) \,\}
\]

The implemented algorithm computes the C sets top-down, unlike the above definitions, which hint at a bottom-up algorithm. The top-down derivation computes fewer C sets than the bottom-up one, because the bottom-up one computes C sets even for the nodes that are irrelevant to the query.

Example 3.3 Consider (again) the modification and the query of Example 3.1. The substitution algorithm first locates a transitive hyperedge that leads to ValueCTV^1 and contains only directly modified and unmodified nodes. Such a transitive edge is {PositionCTS^1, PriceTodayTV^1} → ValueCTV^1, since PositionCTS^1 is directly modified and PriceTodayTV^1 is unmodified. Now we can replace the query with:

\[
\sigma_{C = \text{"John"}}(\text{PositionCTS}^1 \bowtie \text{PriceTodayTV}^1)
\]

Then PositionCTS^1 is replaced by the right-hand side of the hypothetical assignment, and PriceTodayTV^1 is replaced by PriceTodayTV^0 because it is unmodified.
Hence, we end up with the dereferenced query:

\[
\sigma_{C = \text{"John"}}\big((\gamma_{T = \text{"MSFT"},\ \text{Mult}_{1.1}}\ \text{PositionCTS}^0) \bowtie \text{PriceTodayTV}^0\big)
\]

3.2 Sesame's Rewriters

This section describes the challenges that arise during the rewriting of dereferenced queries and the solutions developed for Sesame's rewriter. The variety of operators, datagraphs and scenario queries that have to be considered during query rewriting prompted us to first develop the ultra-conservative rewriter, which exhaustively searches the space of plans. We configured this rewriter with a set of 9 operators, formally defined in [BPP], and the 15 rewriting rules listed in [BPP]. Although for a small set of inputs the ultra-conservative algorithm might perform reasonably well, in the general case its running time is very poor. An exponential blowup was observed, resulting in poor performance for queries with more than four select-modifications. The poor performance of the ultra-conservative algorithm is due to challenges that relate to the structure and size of dereferenced queries. We describe next the challenges along with the solutions that Sesame's rewriter gives.

3.3 Exponentiality in the Number of Select-Modifications and the Minterms Solution

The first challenge is the exponential size of the dereferenced query after replacing each select-modification γ_{c,f} R with f(σ_c R) ∪ σ_{¬c} R. For example, the expression

\[
\gamma_{c_1, f_1}\, \gamma_{c_2, f_2}\, \gamma_{c_3, f_3}\, R
\]

is rewritten into a union of 2^3 terms, one for each combination of applying or not applying f_1, f_2 and f_3 to the corresponding selection fragments of R. One may wonder whether considering common subexpressions could lead to a faster rewriter that would optimize each common subexpression just once.
The shortcoming of this approach is that the modifying functions f_1, f_2 and f_3 above will make each of the two copies of the common subexpression interact differently with the rest of the expression, and hence it becomes impossible to optimize the common subexpression just once.

Sesame's rewriter provides an efficient solution to this problem by identifying the minterms of R. A minterm is a set of tuples on which exactly the same modifying functions are applied. Identifying minterms in a query that involves select-modifications allows the rewriter to remove the exponentiality in the number of select-modifications; instead, the result is exponential only in the number of dimensions referenced in the selections of the query. The minterms technique can be applied in the case of scenarios where:

1. The conditions of the select-modifications do not involve measure attributes.
2. The modifying functions in the select-modifications commute with selection and union operators.

Though the above requirements seem strict, they are quite commonly met. Indeed, modifying functions consisting of arithmetic operators, which we believe are predominant in what-if practice, meet the above conditions. Now consider the following scenario/query, which is amenable to the minterms technique because the modifying functions commute with selection and union, and the conditions are of the form A ∈ range or A = q, where A is a dimension. For simplicity let us consider equality conditions as a special case of range conditions.
\[
\begin{align*}
\text{scenario:}\quad V^1 &\leftarrow \gamma_{A \in [l_1, u_1],\ f_1}\ V \\
V^2 &\leftarrow \gamma_{A \in [l_2, u_2],\ f_2}\ V^1 \\
&\;\vdots \\
V^n &\leftarrow \gamma_{A \in [l_n, u_n],\ f_n}\ V^{n-1} \\
\text{query:}\quad & e_q(V^n)
\end{align*}
\]

The dereferenced query for the above is

\[
e_q(\gamma_{A \in [l_n, u_n],\ f_n} \cdots \gamma_{A \in [l_1, u_1],\ f_1}\ V) \tag{Q6}
\]

Using the minterm technique this scenario-query can be rewritten into the minterm form

\[
e'_1(\sigma_{A \in [c_1, c_2]}\ V) \cup e'_2(\sigma_{A \in [c_2, c_3]}\ V) \cup \cdots \cup e'_{2n-1}(\sigma_{A \in [c_{2n-1}, c_{2n}]}\ V) \tag{Q7}
\]

where the points c_1, …, c_{2n} are simply an ordered list of the l_i and u_i points (i.e., c_1 ≤ c_2 ≤ ⋯ ≤ c_{2n}). Each e'_j applies f_i if the range [l_i, u_i] covers the range [c_j, c_{j+1}], and f_i is the identity function otherwise (i.e., it can be omitted as well).

Example 3.4 The expression

\[
\gamma_{D \in [15, 20],\ \text{Mult}_2}\ \gamma_{D \in [25, 30],\ \text{Mult}_3}\ \gamma_{D \in [10, 60],\ \text{Mult}_1}\ \text{OrderCTDS}
\]

reduces to the following after the select-modifications are removed using the minterm technique:

\[
\begin{align*}
&\sigma_{D \in [10, 15)}\ \text{Mult}_1\ \text{OrderCTDS} \;\cup\; \sigma_{D \in [15, 20]}\ \text{Mult}_2\,\text{Mult}_1\ \text{OrderCTDS} \\
\cup\;&\sigma_{D \in (20, 25)}\ \text{Mult}_1\ \text{OrderCTDS} \;\cup\; \sigma_{D \in [25, 30]}\ \text{Mult}_3\,\text{Mult}_1\ \text{OrderCTDS} \\
\cup\;&\sigma_{D \in (30, 60]}\ \text{Mult}_1\ \text{OrderCTDS} \;\cup\; \sigma_{D \notin [10, 60]}\ \text{OrderCTDS}
\end{align*}
\]

Note that the above minterm form is linear in the number of select-modifications (as opposed to exponential). We can generalize the above transformation to one where the conditions involve d dimensions.
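The cut-point construction behind the minterm form can be sketched as follows. This is an illustration under simplifying assumptions (one dimension, closed input ranges, and no special handling of shared boundary points), not Sesame's algorithm.

```python
# Sketch: decompose a dimension into minterm intervals given the ranges
# [l_i, u_i] of n select-modifications, and record which modifications
# cover each interval entirely.
def minterms(ranges):
    # The 2n endpoints, sorted, cut the dimension into at most 2n+1 intervals.
    points = sorted({p for lo, hi in ranges for p in (lo, hi)})
    intervals = []
    prev = float("-inf")
    for p in points:
        intervals.append((prev, p))
        prev = p
    intervals.append((prev, float("inf")))
    # A modification applies on an interval iff its range covers the interval.
    return [(iv, [i for i, (lo, hi) in enumerate(ranges)
                  if lo <= iv[0] and iv[1] <= hi])
            for iv in intervals]
```

Within each interval exactly the same set of modifying functions applies, so the rewriter emits one union operand per interval: linear in n rather than exponential.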
In this case the number of minterms (i.e., the number of operands in the above union) will be less than (2n+1)^d. A polynomial-time algorithm that performs the above transformation is given in [BPP].

3.4 Multi-Operand Operators Challenge and the Packed Forests Solution

The second challenge arises when the rewriter optimizes unions and other multi-operand operators. In this case, the rewriter produces an exponential number of equivalent expressions.

Example 3.5 Assume that the operators a and b are commutative. Then, given the expression a(b(R)) ∪ a(b(S)), the rewriter will also derive a(b(R)) ∪ b(a(S)), b(a(R)) ∪ a(b(S)), and b(a(R)) ∪ b(a(S)).

System-R style optimizers resolve this problem by optimizing each branch of the union separately, i.e., by employing local optimization (called dynamic programming in the context of System-R). However, local optimization algorithms may miss the opportunity to use a materialized view. The following example illustrates the problem.

Example 3.6 Consider the dereferenced query Sum_C(Mult_1 OrderCTDS) / Count_C(OrderCTDS) against a datagraph containing the views

\[
\begin{align*}
V_1 &= \text{Avg}_C(\text{OrderCTDS}) \\
V_2 &= \text{Sum}_C(\text{OrderCTDS})
\end{align*}
\]

If the optimizer processed each operand of the division operator separately, it would arrive at Mult_1 V_2 / Count_C(OrderCTDS) and would not be able to reach the optimal Mult_1 V_1.

Packed Forests The above example demonstrates that local optimization may miss the optimal rewriting. Our rewriter tackles the problem by employing the packed forest data structure, which efficiently stores all equivalent plans for each subexpression, as opposed to System-R optimizers, which only note the optimal plan and discard the rest.

function buildForest(query q, rules R, datagraph D) returns forest F
  for every hyperedge v_1, …, v_n → v_d of D
    insert the rule v_1, …, v_n → v_d in R
  Queue ← [q]
  insert the node q in F
  while Queue is not empty
    remove from Queue its first element t'
    for every rule r in R
      if r.match(t') = true and returns the set of bindings B
        for every binding b from the set B
          generate new tree t = r.rewrite(b)
          traverse t's non-forested part bottom-up, applying buildForest() to every node
          if t is not already in F
            insert t in Queue
            insert the node t in F

Figure 3: Packed Forests Optimizer

Technically, a packed forest is a data structure that can encode in a compact way a class of equivalent expressions. A forest of an expression E is the set of all expressions equivalent to E. A packed forest of E is a forest in which every subtree of each expression is also a forest. Packed forests have been used to save space in the parsing of natural languages [RN95].

To illustrate how packed forests are used to improve the efficiency of query rewriting, let us reconsider the union expression of Example 3.5. The packed forest of this expression is {a(b(R)), b(a(R))} ∪ {a(b(S)), b(a(S))}. Notice that if the union had n operands and the packed forest of each one had two equivalent expressions, the packed forest encoding would require space linear in n while it represents 2^n equivalent expressions.

The packed forest rewriting algorithm shown in Figure 3 creates the packed forest of a given query. Let us illustrate this algorithm with the rewriting of the query

\[
\text{Avg}_C(\sigma_{Year = 1998}\ \text{Mult}_1\ \text{OrderCTDS})
\]

In the first step (see Figure 4), the initial tree is traversed bottom-up starting at σ_{Year=1998}, and a forest is built out of each non-leaf node.
In Figure 4, dotted circles indicate the roots of the subtrees for which buildForest() is called, and solid boxes indicate completed forests. Since no rules match any of the nodes until the rewriter reaches the root node Avg_C, every forest contains exactly one tree (steps 1–2). At this point the rule Avg(R) → Sum(R) / Count(R) fires and adds the second tree to the forest that is being built (step 3). Note that the new tree already has forests built for its subtrees, since those trees were copied from the original expression without modifications.

Figure 4: Example of Packed Forest Optimization

Next, the buildForest() function is called for every non-forested child of the division node, i.e., both its children. It starts with the left one. This instance of buildForest() uses the σ_A Mult_k ↔ Mult_k σ_A rewriting and produces the expression T_1 = Mult_1 σ_{Year=1998} OrderCTDS. Then it recursively calls buildForest() on T_1 (step 4). The rest of the forest is produced in a similar fashion.

By default, Sesame's rewriting rules use only the locally optimal plan of each subexpression, thus being almost as fast as local optimization algorithms. However, specially written rules spend extra time to scan not only the local optimum but also the equivalent subexpressions, and hence find the optimal rewriting. In our current system implementation only the rule Sum(R) / Count(R) → Avg(R) is implemented in this fashion. The match() function of this rule looks at the roots of all trees in the operand forests, selecting Sums in the first operand and Counts in the second.
Then pairs of Sum and Count with the same operands and parameters are identified, and bindings are produced for each of those pairs. Packed forests greatly reduce the amount of space required by the rewriter and allow us to trade the rewriter's running time against the complexity of the rewritings it can do.

3.5 Experimental Results

This section presents two sets of experiments. First, we evaluate the effect of the techniques described in Sections 3.2, 3.3, and 3.4 on the running time of the rewriter. Second, we evaluate Sesame's overall performance in comparison with recomputation and incremental-update policies in a conventional data warehouse. The data presented in this section were obtained on the same machine (Pentium II 333 MHz, Windows NT, JDK 1.3 with the HotSpot Java Virtual Machine) on which the data for the ultra-conservative rewriter were obtained. In all cases the rewriter was set up with the datagraph schema of Figure 1. The same set of rewriting rules listed in [BPP] was used.

Rewriter Running Time Experiments

In this section we evaluate a rewriter employing the minterms and packed forest techniques. We do not show results for rewriters without these two techniques, for their performance is non-competitive. For our experiments we report only the running time of the rewriter and not the number of produced plans, because the number of produced expressions is linear with respect to the running time (see [BPP]). For the experiments of Figures 5 and 6 the scenario consists of N = 1, …, 10 modifications of the form

\[
\text{OrderCTDS}^i \leftarrow \gamma_{A_i,\ \text{MULT}_{c_i}}\ \text{OrderCTDS}^{i-1}
\]

where the A_i were conditions on the dimensions T and C. The first query was σ_c PositionCTS^N, where c was a condition on the T dimension.
The second query was σ_c ValueCTV^N. Thus the dereferenced queries are of the form:

\[
\begin{align*}
&\sigma_c\ \text{Sum}_D(\gamma_{A_1,\ \text{MULT}_{c_1}} \cdots \gamma_{A_N,\ \text{MULT}_{c_N}}\ \text{OrderCTDS}) \\
&\sigma_c\ \big(\text{Sum}_D(\gamma_{A_1,\ \text{MULT}_{c_1}} \cdots \gamma_{A_N,\ \text{MULT}_{c_N}}\ \text{OrderCTDS}) \bowtie \text{PriceTodayTV}\big)
\end{align*}
\]

Figures 5 and 6 present how the rewriter's running time increases as a function of the number of modifications.

Overall Performance Experiments

In conclusion we present an experiment in which the same hypothetical query σ_c PositionCTS^N (where N = 1, …, 4 is the number of modifications in the scenario) that was used for the rewriting experiment was executed by Sesame's execution engine and by Microsoft SQL Server. Since Sesame's rewriter can optimize this query to be answered entirely using the original materialized view PositionCTS, Sesame's lazy evaluation approach has a huge advantage over the eager execution one, as Table 1 clearly demonstrates.

Table 1: Overall performance vs. the MS SQL Server

<table>
<thead>
<tr>
<th>Modifications</th>
<th>Sesame Exec. time</th>
<th>Incremental Exec. time</th>
<th>Recompute Exec. time</th>
<th>Affected tuples</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>0.25 sec</td>
<td>102 sec</td>
<td>630 sec</td>
<td>151 K</td>
</tr>
<tr>
<td>2</td>
<td>0.9 sec</td>
<td>225 sec</td>
<td>630 sec</td>
<td>168 K</td>
</tr>
<tr>
<td>3</td>
<td>1.1 sec</td>
<td>289 sec</td>
<td>630 sec</td>
<td>249 K</td>
</tr>
<tr>
<td>4</td>
<td>1.0 sec</td>
<td>298 sec</td>
<td>630 sec</td>
<td>257 K</td>
</tr>
</tbody>
</table>

The second column indicates the time that it took Sesame's execution engine to carry out the optimized dereferenced plan. The third column reflects the time that it took the MS SQL Server to update the fact nodes and relevant views according to the scenario, execute the hypothetical query, and roll back the modifications.
This time is equal to the time this scenario would take in a warehouse system that supports incremental updates, i.e., the time to create the delta tables for OrderCTDS and PositionCTS, run the query, and destroy the deltas. The fourth column reflects the time that it took the MS SQL Server to execute the query without the simulated incremental updates. In this case the hypothetical database was created, all the data was copied from the original fact tables along with the necessary modifications, all the views were recomputed, and the query was executed on the hypothetical database. The data warehouse used for this experiment contained only one million orders, or about 50 MB of data. In a more realistically sized warehouse, Sesame's advantage would be even more striking.
Kalle Rindell, Informaatioteknologian laitos, University of Turku, Turku, Finland
Sami Hyrynsalmi, Tampere University of Technology, Pori, Finland
Ville Leppänen, Informaatioteknologian laitos, University of Turku, Turku, Finland

ABSTRACT

Security concerns are increasingly guiding both the design and processes of software-intensive product development. In certain environments, developing the product requires special security arrangements for development processes, product release, maintenance and hosting, as well as specific security-oriented processes and governance. Integrating security engineering processes into agile development methods can have the effect of mitigating the agile methods' intended benefits. This article describes a case of a large ICT service provider building a secure identity management system for a sizable government agency. The project was subject to strict security regulations due to the end product's critical role. It was a multi-team, multi-site, standard-regulated security engineering and development effort executed following the Scrum framework. The study reports the difficulties in combining security engineering with agile development and provides propositions to enhance Scrum with security engineering activities. An evaluation of the effects of the security work on project cost is also presented.

KEYWORDS

Agile, Case Study, Infrastructure, Scrum, Security, Standard, VAHTI

1. INTRODUCTION

Security regulations are an important driver in various aspects of software development, information systems and services. Even when formal security standards or guidelines are not strictly required, the drive for security still guides the selection of design patterns and technological components, as well as the design and development work.
Increasing diversity in development methods, technology, and the environments where systems are used has prompted organizations to follow various security standards, and has created the need to establish new ones to guarantee adequate security assurance. In 2001, the government of Finland began to issue a set of security regulations called the VAHTI instructions. Compliance with the instructions is now mandatory for all government agencies, and the regulation is also applied to any information system and data connected to a VAHTI-classified system. While the importance and use of security regulations has increased, the use of lightweight software development processes and methods, i.e., agile development, has become the de facto standard in the industry (VersionOne, 2016). While a series of methods has been suggested for conducting security engineering activities in an agile project (see e.g. Alnatheer, Gravel & Argles, 2010; Baca & Carlsson, 2011; Beznosov & Kruchten, 2004; Fitzgerald, Stol & Sullivan, 2013; Ge, Paige, Polack & Brooke, 2007; Pietikäinen & Röning, 2014; Rindell, Hyrynsalmi & Leppänen, 2015), the empirical evidence is still largely anecdotal and the reported cases are specific to an industry or a single company. The study reported in this paper is, by its nature, exploratory. It reports the experiences of agile development in a security-regulated environment. The research objective (RO) is:

**RO:** Identify best practices as well as hindrances of using agile software development methodologies in security engineering.

The results contribute to the ongoing discussion as the outcome of a deep analysis of combining security engineering with an agile method in an industry setting. Furthermore, the results of this study pave the way for further work deepening our understanding of the benefits and drawbacks of using agile software development methodologies in security-sensitive development work.
In the case described, a Scrum project was conducted with the objective of building an IDM system for VAHTI-compliant information systems, and a secure VAHTI-compliant server platform to host the systems, including the IDM. The server platform was also to be used to host software development projects (with certain dispensations). The project was executed during 2014 and 2015 and had a duration of 12 months. The development team was split into two to three geographically dispersed groups, with the actual number of teams involved depending on the tasks at hand and the overall phase of the project. As a standing practice with the government agency that initiated the building of the platform, the project was managed using an unmodified "textbook version" of Scrum. This called for strict adherence to fixed-length sprints, well-communicated product and sprint backlogs, and daily progress monitoring by the Product Owner and steering group. The project was under strict control of the Project Management Office, and the schedules of related infrastructure and software development projects depended on the results of this project. Compliance with VAHTI was a central objective of the project. In addition to VAHTI, the client agency had their own additional security demands, as well as recommendations from other government agencies, most importantly the National Cyber Security Centre (NCSA-FI)². The server platform to be built was to be acceptable for use by all government agencies, as well as by private companies or organizations requiring a similar level of VAHTI compliance. This paper presents how Scrum was applied to the security-related work required in the project, and how the project was conducted. As the study revealed that not all the objectives of using 'pure' Scrum were met, suggestions are made to improve the efficiency of the development work by introducing rudimentary security engineering extensions to the Scrum framework.
The modifications include a new role of security developer, as well as specific security sprints and other security-oriented additions to the run-of-the-mill Scrum. We also discuss how the introduction of security engineering activities into the project affects the cost, efficiency and conduct of the project.

2. BACKGROUND AND MOTIVATION

The use of agile methods has become an industry practice, whereas the security standards regulating software development processes, such as ISO/IEC 21827 (2008) and ISO/IEC 27002 (2013), originate in the time preceding the agile methods. Based on the literature, and also on the findings of this case, the typical approach to agile security engineering is to simply start using the methodology at hand without formal adjustments, with the notable exceptions of the thorough and formal approaches to security engineering described by Baca & Carlsson (2011) and Fitzgerald et al. (2013). There are even well-documented cases of attempts to achieve a formal ISO/IEC capability maturity level while incorporating agile methods, such as Díaz, Garbajosa & Calvo-Manzano (2009). Unfortunately, the findings and suggestions made in these studies were not directly applicable in a project that was not strictly restricted to software development. Instead, a more *ad hoc* approach was used. In this approach, the security-related tasks are treated simply as items in the backlog: the security requirement items are converted to tasks, given story points, and completed among the other items as best seen fit. Security items which cannot reasonably be time-boxed, because of the inherent uncertainties of the work or the inexperience of the team, are separated from the Scrum sprint cycle and completed in non-time-boxed spikes.
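The ad hoc treatment of security items described above can be sketched as a simple backlog model. This is only an illustration of the planning logic, not any tool actually used in the project; the class, field and item names are hypothetical.

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple


@dataclass
class BacklogItem:
    """A work item derived from a (possibly security) requirement."""
    title: str
    story_points: Optional[int]  # None: cannot reasonably be time-boxed
    security_related: bool = False


def plan_sprint(backlog: List[BacklogItem], capacity: int
                ) -> Tuple[List[BacklogItem], List[BacklogItem], List[BacklogItem]]:
    """Fill a sprint up to `capacity` story points; items that cannot
    be time-boxed are diverted to non-time-boxed spikes."""
    sprint, spikes, remaining = [], [], []
    used = 0
    for item in backlog:
        if item.story_points is None:
            spikes.append(item)          # e.g. the OS hardening tasks
        elif used + item.story_points <= capacity:
            sprint.append(item)
            used += item.story_points
        else:
            remaining.append(item)       # deferred to a later sprint
    return sprint, spikes, remaining


# Illustrative backlog: two security items, one of which defies time-boxing.
backlog = [
    BacklogItem("Implement login audit log", 5, security_related=True),
    BacklogItem("Harden Linux base image", None, security_related=True),
    BacklogItem("User profile page", 3),
]
sprint, spikes, remaining = plan_sprint(backlog, capacity=8)
```

Under this sketch the hardening item never enters the time-boxed sprint at all, mirroring how the team handled such work in practice.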
While the ad hoc method may succeed in achieving "minimum viable security" by complying with the formal requirements, it is hardly the most effective way to achieve the goals, nor does it provide the best security assurance for the end product. Achieving security assurance is by all means possible with careful planning, but the ad hoc approach lacks proper security requirement management and security task pre-planning. The absence of these elements in the project management methodology tends to lead to inefficiencies and, consequently, delays and increased development costs. Lack of proper security assurance may also increase the amount and severity of the residual security risk during the software system's life span. Our argument is that by adjusting the Scrum methodology to better align with security engineering tasks, the security cost overhead can be reduced while the security of the end product is enhanced, compared to traditional sequential security engineering practices. This is achieved by incorporating the security processes into Scrum activities, as opposed to treating them merely as items in the backlog, and by introducing new security-oriented roles into the development team. By incorporating the security engineering activities into the development method, the full benefit of incremental agile methods can be utilized to achieve a better efficiency ratio and, arguably, better end products. The next subsections provide more information about VAHTI and the use of the Scrum methodology in development projects requiring security standards compliance. Due to similarities in the requirements, the observations and recommendations we make in this paper are applicable also to software safety regulations, e.g. in the medical field.

### 2.1. Security-Augmented Scrum

Scrum is a generic framework, originally intended for managing software development projects with small co-located teams.
Scrum suggests that the product to be completed is divided into smaller components, or features, based on the customer requirements. These requirements are represented by user stories, which are translated into product features by the development team. Features are then further divided into work packages, or items, which are compiled into a product backlog. Items in the product backlog are completed in an incremental and iterative manner during short development sprints. The team, consisting of the Scrum Master, the Developers, and the Product Owner as the customer's representative, determines the items to be completed during the next 2–4 week sprint, which consists of daily scrums. After the sprint, the work is demonstrated, and optionally the team performs a self-assessment of the past sprint in a retrospective event. In this representation the Scrum process is augmented by three major extensions, presented in Figure 1.

1. The role of a security developer. The security developer, or developers, focus on the security of the product, and typically create or review the documentation required to pass the security audits.
2. Security assurance provided by creating security artifacts, mostly security-related documentation. These consist of the security training certificates required from the project team, but most importantly the architecture documentation, risk management plans, test plans, test reports, the system's log files and other evidence required by the security auditor. The audits also produce reports, which are part of the security assurance provided for the customer.
3. Anticipation of and planning for security-related tasks. To better illustrate this aspect of security work, security engineering activities are presented as iterative tasks in the sprint cycle in addition to the daily scrum.
It should be noted that not all sprints need have all the security tasks, and if the organization decides to perform dedicated security sprints, the daily scrum may consist entirely of security activities. In a project using unmodified Scrum, such as the one in this case, the security testing, reviews and audits are viewed as normal stories in the sprint backlog and executed as part of the daily scrum. In this view the security tests and audits are part of the product, as compliance with security standards and regulations is mandatory during development time. The main shortcoming is the difficulty, or outright inability, to estimate the amount of work involved in the security activities, which merits giving them special treatment. Emphasizing the importance and special role of the security stories, rather than treating them as overhead and an extra burden, is expected to produce better results with higher efficiency. In effect, this will reduce the cost of the development work.

2.2. Security Regulations and Standards Applied

VAHTI is an open and free collection of the Finnish government's security guidelines, published on the Internet since 2001. The aim of this regulatory framework is to promote and enforce organizational information security, risk management and the overall security competence of various government agencies, and to harmonize security practices throughout the organizations. As of spring 2016, the collection comprises 52 documents. The following VAHTI instructions were found to be relevant for this project:

- VAHTI 2b/2012 "Requirements for ICT Contingency Planning", FMoF (2012)

Of these, only the document 2b/2012 is available in English. The other relevant documents are available only in Finnish, and their English titles have been translated for the purpose of this article. This also applies to much of the VAHTI terminology: official English translations may not exist, may be inconsistent between documents or may change over time.
As a curious example, the Finnish name of the VAHTI board itself has changed recently, although the English translation has not. In addition to the VAHTI requirements, the company responsible for building the platform is audited for compliance with ISO/IEC standards 9001, 27001, 27002, and 21827, as well as with its own extensive management framework, which it makes available to its clients for review. The company has functions in the United States, so the Sarbanes-Oxley (SOX) act also applied. SOX is mostly concerned with the financial elements of the project, but still affected the workload of the Scrum Master by adding reporting responsibilities.

2.3. Data Security in VAHTI

VAHTI classifies information systems into three security levels: basic, increased and high. The server platform, on which the IDM system was installed, was built for the increased security level. Information contained in the systems is classified into levels from I to IV, where IV is the lowest. Information contained in a system audited for the increased security level may contain cleartext information up to level III; in this case, however, all data was encrypted despite the official classification level.

2.4. About Security Engineering

The term 'security engineering' in the software industry comprises all security-related tasks within a software-intensive product's life cycle. The standard's way to categorize these activities is to divide them into three main process areas: risk, engineering and assurance (see ISO/IEC 21827). The risk process assesses the risk and aims at minimizing it by assessing threats and vulnerabilities, and the impact they have, producing risk information. The security engineering process uses this information, with other security-related input, to define security needs and provides solutions to fill them. The assurance process collects and produces evidence of the security's existence, and aims at its verification and validation.
The ultimate goal of these processes is to identify and mitigate risk, and to define the impact of, and the actions to be taken when, the residual or unrecognized risk is realized: what will happen when things break.

3. RESEARCH PROCESS

This study follows the case study design method of Yin (2003) and the qualitative research approach of Creswell (2003). For the study, we were looking for a development project that was both using agile methods and fulfilling VAHTI regulations. Our decision was to focus on the VAHTI regulations, as they are viewed as a national standard and, therefore, the number of possible cases would be higher. In addition, we were looking for a project that had either ended or was near its end, so that we would be able to evaluate the success of the model used. Finally, the selected case should be a representative candidate as well as able to produce rich information about the phenomenon under study. We ended up selecting a project case in which an identity management and verification service was ordered by a governmental customer who required the use of VAHTI. The development work was done following the Scrum software development method. As Scrum is currently one of the most used development methods, the findings from this case study should be representative. The project was executed by a mature, well-known software product development and consultancy company in Finland. The company has a long history both with agile methods and with producing information systems for the government. By the wish of the company, the client and the interviewees, all participants in the project shall remain anonymous. For this study, we held a post-implementation group interview with the key personnel of the selected project. We used a semi-structured interview approach, where time was given to the interviewees to elaborate their thoughts about the phenomenon under study.
The general questions concerned the scope and size of the project, the amount of personnel involved, and the daily routines of the team. The security standards that were applied to the project were also gathered. The security mechanisms developed to implement the requirements were charted, along with how they were presented to the client and auditors. Finally, the amount of extra work caused by the security requirements was discussed and roughly estimated, and the interviewees recounted their views of the lessons learned in the project. The interview session also acted as a retrospective for the whole project, where the participants were able to express their views of the positive and negative aspects of the project and the effect the security requirements had. The results of the interview were then analyzed by the researchers and the key observations were emphasized. The project was selected as a potential research target due to its strict security requirements and the fact that it was executed and managed using the Scrum framework. The interviewees were the Scrum Master and the head architect of the project. They were both deemed key personnel of the project, and they were able to provide insight into the project background, its execution and its results. The selected interviewees were also the only ones who persistently participated in all of the sprints and were involved in the project for its whole duration. The questions posed to the interviewees were divided into three groups. The first three questions concerned the project background (Q1-Q3); the following five questions concentrated on the project process, security standards, and feedback on Scrum and security (Q4-Q8); and the final two questions canvassed the interviewees' views on the project results and success factors (Q9-Q10). The questions were as follows:

- Q1: Project subject and scope?
- Q2: Project resources, budget, and duration?
- Q3: Personnel locations, multi-site teams?
- Q4: What VAHTI standards were followed?
- Q5: What other security standards and regulations were included?
- Q6: Other restrictions (safety, privacy, agency-specific regulations)?
- Q7: What types of steps were taken to enforce them?
- Q8: How was the security assurance verified (audited) and the audit trail maintained?
- Q9: Did the budget and schedule hold, and what was the amount of extra work caused by security?
- Q10: What were the lessons learned?

After the interview, some complementary questions were asked via email to confirm certain details, but otherwise the initial interview session was deemed sufficient for the purpose of this study. Access to exact budget or workload figures, system logs or other technical documentation was not made available for research: the security classification of the platform prevented using this data even for verification. Instead, the interviewees relied on their personal experience and notes made during the project, and provided best estimates on the matters at a general level acceptable for publication.

4. CASE STUDY: THE PROJECT

The agency required a VAHTI-compliant IDM platform for their various information systems, and for user and system administration and management purposes. The platform was to be built using off-the-shelf components, installed on common open source operating systems, and deployed onto a large scalable array of virtual servers. A similar IDM platform was also built to authenticate and manage the identities of the administrators who manage other VAHTI-compliant servers and services, and it is to be separately instantiated for regular office users as well, based on the experience and solutions gained in this project.
The IDM was deemed a critical service with respect to the agency's security, privacy and business requirements: while the agency had 650 internal users connecting to 450 separate server-side computer systems, they also manage a sizable array of contractors, with up to 12,000 users in total. The building project was conducted at the same time the server platform itself was being built, which added to the challenge in such a way that all the requirements of VAHTI were to be met by a novel implementation. Nearly all the design and definition work was to be completed in this project. To add to the challenge, the work was to be performed using Scrum, mainly to ensure the steering group's visibility into the project's progress, and also to enable reacting to any unexpected obstacles or hindrances met during the project execution. Unfortunately for the project team, the customer also saw the use of Scrum as a method to change the project's scope during its execution by adding items to the product backlog, or removing them, which caused a certain degree of confusion in the team and forced it to abandon some work already completed. These aspects of Scrum projects, however, are not a security issue but belong to the more generic field of project management, and are therefore not discussed further. The development work consisted of distinct phases, which were completed during one or more iterations:

1. **Definition**: synthesis of the requirements, component candidate selection, risk assessment and analysis.
2. **Design**: architecture design, definition of interfaces, component hardening plans.
3. **Development**: component research, modification (i.e., hardening), and installation.
4. **Testing, Reviews, Audits and Acceptance**: security testing, external audits and formal acceptance of the end product as a part of the agency's system portfolio. In effect, the security assurance processes.
As there were no formal milestones preset at the beginning of the project, the security gates, such as audits, were passed flexibly whenever each feature was considered mature enough. This removed a certain amount of unnecessary overhead, as traditional fixed milestone dates may call for the team to work overtime, which may get costly due to pay compensations and cause delays to other projects due to resource shortages.

### 4.1. Project Organization

The project involved an average of nine persons at any given time: a Scrum Master, a dedicated Product Owner, a Security Architect (in basic Scrum, part of the development team in the role of a developer), and the developers, split into their production teams based on location and occupation. The service provider is a devout follower of ITIL³, a well-established and recognized set of industry standard best practices for IT service management. Typically for an ITIL-oriented organization, the infrastructure production teams reside in their own "silos", with very little communication with other teams. The production teams were divided by their specialization, in this case "Storage and Backup", "Server Hardware", "Windows Operating Systems", "Linux Operating Systems", "UNIX Operating Systems", "Databases" and "Networks". In addition, the IDM application specialists came from their own team, residing within a separate unit of the company. The project brought specialists from these various teams together, at least virtually, for the daily 15-minute stand-up meeting. Due to the team's multiple physically separated locations, the meetings were without exception held as telephone conferences. The teams were utilized in different phases of the project in such a way that only the Scrum Master, the security developer (i.e., the architect) and the Product Owner had personal activities in every single sprint throughout the project.
The developers were part of a larger resource pool, and were drawn into the sprints or spikes in various phases of the project whenever their expertise was required.

4.2. Project Execution

Much of the work related to the VAHTI regulations was done in the planning phase: it turned out that the client agency had compiled their own list of requirements, which was based on VAHTI but had new security elements added to the public requirements. This partially compensated for the dropping of the specific requirements for VAHTI-compliant application development (FMoF, 2013) at the beginning of the project. The project extended over a period of 12 months, from the planning phase to the accepted delivery of the final sprint. The amount of work was measured in story points, and the average velocity of each sprint was 43 points. Dividing the sprint's person-days (9 developers on average, over a 15-work-day sprint) by the velocity gives a rough estimate of one story point equaling about three work days. As an overall measure, the story points give an impression of the size of the tasks. This sort of conversion may not be meaningful in general, outside the scope of a single project, as story points are primarily used to compare the features (or stories) to each other within a single project. For the purposes of this study, the fact that the largest single units of security work, the hardenings, were not performed in sprints and therefore not measured in story points makes pinpointing the cost of the security work much harder. In this case, the interviewees' estimates were the only source for the amount of workload, and although they are trusted to be reliable, exact figures would have been preferred. From the beginning, the team's approach to the security tasks was pragmatic, although in terms of Scrum rudimentary: stories that were found difficult to time-box at the time of their implementation were taken out of the sprint cycle and completed as spikes.
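The back-of-the-envelope story-point conversion above can be reproduced directly; the figures (43 points average velocity, 9 developers, 15-work-day sprints) are those reported by the interviewees.

```python
# Rough conversion of story points to work days, using the
# project's reported averages.
velocity = 43        # average story points completed per sprint
developers = 9       # average number of developers
sprint_days = 15     # working days per sprint

person_days_per_sprint = developers * sprint_days   # 9 * 15 = 135
days_per_point = person_days_per_sprint / velocity  # 135 / 43 ~ 3.1

print(f"one story point ~ {days_per_point:.1f} work days")
```

As the paper notes, this figure only gives an impression of task size within this one project; story points are not comparable across projects.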
Prime examples of such tasks were the operating system hardenings, a task essential for the platform security: the project team allocated resources to these tasks and simply ran them for as long as they took. This resulted in the project structure presented in Figure 2, with major side tracks to the main sprint cycle. As tasks such as these were at the very core of the project goals, it would have been beneficial to go to the trouble of adjusting the Scrum structure to better accommodate these items. The sprints are represented as the main story line. The parallel lines represent the spikes that were executed outside the main sprint structure. Their results (deliverables) were demonstrated at a sprint demo, although they were executed independently, without time-boxing. There were three distinct task types outside the sprint structure:

1. **System hardenings**, performed for each tier or environment of the system under development: the Development, Quality Assurance (QA), and Production environments. The results obtained in the Development phase were not directly usable for the upper environments, whereas the QA environment was built to be production-like. As a result, the work done in the QA phase was partly reusable in the Production phase. Despite the technical similarities, the differences between the ITIL-guided maintenance models of these two environments were so great that the team proceeded to execute the Production environment hardenings as a spike as well.
2. **Documentation** was a ubiquitous process during the development. This included risk management, technical architecture and technical component documentation, test plans and reports. Documentation comprised most of the security assurance. The complete list of VAHTI requirements for documentation is presented in Appendix 3 of the VAHTI instruction 3/2012. In this document, 224 mandatory requirements are listed for increased security level information systems.
Almost all of these requirements call for some type of written evidence to be verified and reviewed, although most of the documentation artefacts are created in phases other than the development phase of the information system's life cycle.

3. **Reviews and audits** were performed based on the documentation and included physical testing of the implementation.

4.2.1. Product Deployment Model

The demand for increased security (literally, the "increased level" in the VAHTI security classification) also dictated how the systems were deployed: to maintain an audit trail, all changes to the production environment, including all server and hardware installations during its buildup, were performed following ITIL processes. These processes added extra levels of bureaucracy, and the team reported that getting acceptance from the Change Advisory Board (CAB) for all changes to be made in the production environment had a very adverse effect on the deployment schedules. Combined with the policy of role separation between developers and maintenance personnel, this caused the building and installation of the production environment to be document-driven, bureaucratic and slow. The policy of separating the roles of developers and maintenance personnel effectively prevents the DevOps type of continuous delivery maintenance model, and would require, e.g., a form of "continuous security" model such as that presented by Fitzgerald & Stol (2014). In this project, the continuous delivery model was used with the lower environments, speeding the rate of delivery significantly. When building the production environment, the flow of work established in previous sprints was disrupted, which caused unnecessary slowness and also cost overhead. Documentation necessary for the maintenance personnel was to be created before the handover, and as such did not necessarily contain all the required information and details.
Mandatory use of ITIL processes when building the production environment was one of the main schedule hindrances of the project according to the interviewees. 4.2.2. Team Structure and Tools Depending on the items in the current sprint backlog, the team was divided in two or three geographically separated locations during the whole length of the project. The organizational separation of the developers resulted in situation, where even the persons based on the same location did not necessarily sit in the vicinity of each other or communicate with other team members directly. The central location for the project, and the physical location of the server platform was Helsinki, Finland, but the team members were divided on several sites. The Scrum Master performed most of her duties remotely, without being in direct contact with the developers except rarely. As usual in large ICT service companies, almost all developers were also involved in other projects at the same time. The overall experience of the team was deemed very high, although in infrastructure work the use of agile methods is not very common, and is customer dependent at best. As per this fact, most personnel were mostly inexperienced with Scrum, although they received basic Scrum training before and during the project. Use of Scrum was reflected by the use of collaboration and project management tools, most importantly Atlassian JIRA specifically customized for the agency’s use. The Scrum Master promoted and demanded the use of JIRA as reflecting the work performed in daily sprints. The Product Owner’s most visible role was following the project’s progress based on what team members reported on this tool. In general, the team was reported to be happy or at least content with Scrum, at least up until the production environment building phase where ITIL processes broke the team’s workflow. 4.2.3. 
Security Story Types and Their Implementation The requirements called primarily for well-documented software quality and for component and process security. Most of the additional work consisted of directly security-related tasks and the creation of their documentation. The platform also had strict and formal requirements for availability and reliability. Outside the security domain, the main source of regulation-related work was the duplication of all infrastructure into the service provider's second data center. The data centers themselves, as well as the personnel administering the system and its infrastructure, were subject to meticulous security screening. A proper level of access control was enforced, the server rooms' CCTV system was extended to cover the new servers, and remote connection practices were reviewed. All personnel involved with the client were to be security checked by the Finnish Security Intelligence Service. The data itself had to reside within the country's borders, and even the infrastructure's configuration data and work tickets in the Configuration Management Database (CMDB) were to be made inaccessible to personnel who were not security checked. 4.2.4. Technical Tasks: System Hardening As an infrastructure project, the main technical obstacle was securing the hardware, operating systems, middleware and the application (the IDM system) against security threats. The bulk of this work was performed by one of the interviewees, the security developer. Hardening in this case covered the analysis, removal, or blocking of hardware and software features, and testing against the threats. The purpose is to reduce the attack surface of the platform under construction, protect it from both internal and external threats, and minimize the components in which potential future vulnerabilities may emerge. On the hardware level, hardening means controlling the network interfaces and the surrounding local area network, routing, and traffic rules.
It also covers any and all hardware maintenance interfaces, typically accessible through the network. On the operating system and software level, the operating system or software manufacturers, such as Microsoft, provide their own hardening instructions, which were used as a baseline. These were combined with the best practices drawn from the consultant company's own experience and policies, and with the explicit instructions and requirements given by the client organization. These included uninstalling a large number of modules and services, disabling a number of user accounts and policies while enforcing a number of others, and restricting access and privileges throughout the system. The same principles were applied to each software component installed on the server platform. By definition, all access rules and user validations had to be applied to the infrastructure services provided for the server platform; these include software and hardware patching, network access, malware protection, hardware and application monitoring, and backups. The inherent uncertainty of security testing, together with the interdependency of the components affected by the removal and alteration of services and the restriction of rights, made predictable time-boxing of these tasks so unreliable that the team decided to execute them as spikes. 4.3. Cost of Security Work The Scrum Master estimated that the extra work caused by the regulations was approximately 25 to 50% of the project's total work load. As accurate billing information was not available, this was accepted as the best estimate of the real cost of the security work. Most of the overhead stems from the documentation of the solutions. Security-related documentation was created by all team members: the project manager and the security developer (architect) created most of the documentation, and the Product Owner, as the client's representative, made sure that the correct regulations were applied.
Developers were burdened with creating an appropriate level of security-oriented technical documentation of all their work, especially related to operating system and application hardening procedures. The hardening process itself lasted for four months, representing the largest task in the project. Changes to the production environment were further complicated by the ITIL-based requirement of strict Change Advisory Board processing of each change that was made. 5. ANALYSIS AND DISCUSSION The research objective of this study is to identify best practices as well as hindrances in the use of agile software development. This case provides a good view of how unmodified Scrum lent itself to a situation where a large amount of regulation caused extra work with uncertain work estimates. Due to these uncertainties, and the large amount of presumably indivisible work included in some of these tasks, the team was simply not able to fit certain features into the sprint structure. Also, in contradiction to the traditional security view, the iterative and incremental approach to development and building forced the project team, the steering group and also the client to rethink how the security assurance of the end product and its management was to be provided. In a sequential waterfall model, the security deliverables and tasks would have been tied to predetermined milestones, without the flexibility provided by Scrum. As presented in Figure 2, the project was in practice executed partly following a 'waterfall' model, yet without milestones fixed in advance; these waterfall processes ran alongside the main project, and their deliverables were then included in the project outcomes. Based on the above, in the strictest sense the project organization failed to utilize the Scrum methodology to create the product, although the superficial requirements were fulfilled -- the customer was mostly interested in progress reports and the timely delivery of a complete and standard-compliant end product.
The failures were partly due to inflexibility on the part of both the company developing the system and the client, who demanded a formal and fixed approach to Scrum. Sprint planning, for example, called for features to be completed during the sprint. When this was already known to be extremely unlikely, these features were agreed to be performed as spikes. In retrospect, this was most likely caused by the perception of security features as overhead rather than as actual features of the product, while in reality the security features were essential to the product itself. Even without applying any formal modifications to Scrum, at least one of the "secure Scrum" features presented in Chapter 2.1 and Figure 1 was taken into use, as the project architect assumed the role of security developer. In practice, most of the physical work triggered by security requirements was done in spikes outside the sprints. When the work is done in a non-iterative way, simply letting the tasks run alongside the project, the benefits of Scrum are lost. Based on the project manager's estimate, the security features caused a cost increase by a factor of 1.5-2x, so there is a large saving potential in rearranging the security work. Attempting a new approach and restructuring the work into iterations is recommendable in future projects. Initial spikes are acceptable, but in this case the team failed to utilize the experience gained from them, and continued to implement similar security features as spikes even after the first one. This is represented in Figure 2 by the OS hardening spikes H1, H2 and H3. The team defended their selected approach by stressing the inherent differences in the physical environment and management practices of the development, quality assurance and production environments, but from the undertones of the developer's interview it was also perceivable that the attitude towards using Scrum in this kind of project was negative to start with.
Time-boxing the uncertain tasks into three-week sprints, performing the demonstrations after each sprint, and other Scrum routines were perceived to some degree as distractions from the main work. This mentality seemed to affect some members of the team even though the personnel were trained in the Scrum method and the necessary tools. During the interview, the team was quite uniform on the key success factors of the project. They emphasized the importance of document management and of very strict requirement management. The amount of overlapping and sometimes outright conflicting security requirements, even within the VAHTI requirements themselves, increased the Scrum Master's workload substantially. The use of Scrum was deemed to have an overwhelmingly positive effect, by enabling faster reaction to changes in the requirements and directness of the client feedback. The team also praised the frequent sprint planning for keeping the team focused, in comparison to the very long spikes run during the project. In retrospect, the team regretted not utilizing the Product Owner more from the very beginning, as direct channels to the client were viewed as very valuable during the implementation. Also, the client's key personnel were not always present at sprint demos, which caused unnecessary questions and insecurity on the client's side, even though the features had already been completed and comprehensively demonstrated once. The effect of Scrum on the efficiency of the work was estimated to be very positive. The extra cost of the security work was partly compensated by the fact that rigorous testing and documentation of the technical solutions also had a positive impact on the quality of the work, improving the system's reliability and availability. It can also be argued that the cost of security work is lower when it is done proactively rather than when repairing an old system or trying to recover a breached one. 6.
CONCLUSION AND FUTURE WORK This study has presented a case of building an infrastructure and setting up an identity management software platform for a governmental customer. The customer agency had its own set of security regulations and requirements, namely the VAHTI instructions. In addition to the government requirements, the service provider contracted to build the system was committed to several international ISO/IEC standards, as well as to its own management frameworks and sometimes complex financial reporting rules. Both the agency's and the service provider's project management offices required employing the Scrum methodology as the project management framework. The research was conducted through post-project semi-structured interviews, and the information was gathered from the participants' experiences and notes of the project. The parties involved are anonymized, and only publicly available information about the project and the regulations involved is disclosed. Scrum was initially applied in its standard form, with no formal security extensions. Security engineering activities were integrated into the product backlog and performed within sprints whenever possible. During the project, the team adapted to the security work by creating a de facto security developer role, and many of the security engineering tasks ended up being performed outside the regular sprint structure: typically, security assurance is based on evidence gained through security testing, which in this case, too, had an adverse effect on the team's ability to schedule and time-box the items subject to these tests; these were performed as spikes instead. The same technique was also applied to documentation, which was produced outside the main sprints, and to audits and reviews, which were separately scheduled one-time tasks. The results of these spikes were still presented in sprint demos among the other artifacts and results.
The reported issues at product deployment in the production environment call for developing and applying a delivery model that provides the required security assurance without interrupting iterative development. The team viewed the use of Scrum as a positive factor for project cost and quality, although arguably Scrum was not utilized to the maximum extent: important parts of the work were done in spikes outside the main sprint flow, without attempts to utilize the experience gained from them to time-box future tasks. This was seen to benefit the project, although an iterative and more exploratory approach to those tasks might have provided more benefits in the long term, and it remains possible that the experience gained in this project can be utilized in similar future projects. The project team still regarded the security engineering activities and the provision of the required security assurance as constituting a significant amount of extra work: at the final stages, the work load effectively doubled. The initial approach in this project was more or less an unmodified textbook example of the Scrum method, but the team naturally applied certain security extensions. Simply conducting weekly product backlog refinement sessions was deemed essential to the project's success. This project was a model case of two large entities that have decided to fit their organizations to work according to an agile framework. The nature of the work itself has not changed, although the introduction of a growing amount of security engineering and increasing regulation put an additional strain on the project's requirement management. Agile methods have an inherent preference for producing working solutions instead of spending time documenting them; in contradiction to this goal, the documentation of the solutions is a key deliverable in the field of security.
Scrum will continue to be used by both organizations, and as the team's experience grows, we expect the cost of secure systems development to drop while quality and security improve. Based on the experiences gained in this case, Scrum has shown the potential to be suitable for security-oriented development work. With certain additions and modifications, it can be used to provide the security assurance required by regulators in the ICT and software industry, especially when applied by an organization capable of adjusting itself to fully utilize the flexibility of incremental agile frameworks instead of partially reverting to a sequential mode of operations. We have yet to observe a pure agile project in which security standards are in a central role: truly integrating security engineering processes and security assurance activities without losing the agile values and the benefits gained by the use of those methods is still a work in progress. ACKNOWLEDGMENT The authors gratefully acknowledge Tekes - the Finnish Funding Agency for Innovation, DIMECC Oy, and the Cyber Trust research program for their support. This paper extends the conference paper "Case Study of Security Development in an Agile Environment: Building Identity Management for a Government Agency" by Rindell, Hyrynsalmi & Leppänen (2016). REFERENCES Kalle Rindell is an enthusiast of computers, the Internet and security, with nearly two decades of working experience as a programmer and R&D engineer. He is currently completing his PhD at the University of Turku, Finland, in the fields of security and agile software development, while working for the CGI Group as a consultant. Sami Hyrynsalmi, DSc (Tech), is a nerd who has always enjoyed working with programming and computers. After graduating with an MSc in software engineering from the University of Turku in 2009, he decided to focus on the real issues and started his doctoral dissertation work on mobile application ecosystems.
After successfully defending his thesis in 2014, he has focused on various themes ranging from software and its production to business ecosystems, software metrics, and computer games. Currently, he is working as an Assistant Professor (tenure track) of Software Product Management and Business at TTY Pori, Tampere University of Technology. Ville Leppänen is a professor of software engineering and software security at the University of Turku (UTU), Finland. He has 180 international conference and journal publications. His research interests relate broadly to software engineering and security, ranging from software engineering methodologies, project management practices and tools to security and quality issues, and to programming languages, parallelism, and architectural design topics. Leppänen is currently leading six research and development projects. He acts as the head of Software Engineering (UTU) and leader of the Software Development Laboratory of the Turku Centre for Computer Science. ENDNOTES 1 https://www.vahtiohje.fi/web/guest/home 3 http://www.itil.org.uk/ 4 https://www.vahtiohje.fi/web/guest/708 (available in Finnish only) 5 https://www.atlassian.com/software/jira/agile 6 http://www.supo.fi/security_clearances
Towards Building a Forensics Aware Language for Secure Logging Shams Zawoad¹, Marjan Mernik², and Ragib Hasan¹ ¹ University of Alabama at Birmingham Birmingham, AL-354209, USA {zawoad,ragib}@cis.uab.edu ² University of Maribor Maribor, Slovenia marjan.mernik@um.si Abstract. Trustworthy system logs and application logs are crucial for digital forensics. Researchers have proposed different security mechanisms to ensure the integrity and confidentiality of logs. However, applying current secure logging schemes to heterogeneous formats of logs is tedious. Here, we propose Forensics Aware Language (FAL), a domain-specific language (DSL) through which we can apply a secure logging mechanism to any format of logs. Using FAL, we can define a log structure, which represents the format of the logs and ensures the security properties of a chosen secure logging scheme. This log structure can later be used by FAL to serve two purposes: it can be used to store system logs securely, and it helps application developers with secure application logging by generating the required source code. Keywords: DSL, Secure Logging, Audit Trail, Digital Forensics. 1. Introduction In recent years, the number of digital crime cases has increased tremendously. An annual report of the Federal Bureau of Investigation (FBI) states that the size of the average digital forensic case is growing 35% per year in the United States. From 2003 to 2007, it increased from 83 GB to 277 GB [9]. Various logs (e.g., network logs, process logs, file access logs, and application audit trails) play a vital role in a successful digital forensics investigation. System and application logs record crucial events, such as user activity, program execution status, system resource usage, network usage, and data changes, through which important attacks can be identified, e.g., network intrusion, malicious software, and unauthorized access to software.
Logs are also important for ensuring the auditability of a system, which is crucial for making a system compliant with various regulatory acts, such as the Sarbanes-Oxley Act (SOX) [7] or the Health Insurance Portability and Accountability Act (HIPAA) [36]. Keeping system audit trails and reviewing them in a consistent manner is recommended by the National Institute of Standards and Technology (NIST) as one of the good principles and practices for securing computer systems [35]. While the necessity of logs and application audit trails is indisputable, the trustworthiness of this evidence remains questionable if we do not take proper measures to secure it. In many real-world applications, sensitive information is kept in log files on an untrusted machine. As logs are crucial for identifying attackers, attackers often target the logging system to hide the traces of their presence or to frame an honest user. Very often, experienced attackers attack the logging system first [2, 3]. Malicious inside users, colluding with attackers, can also tamper with logs. Moreover, forensics investigators can also alter evidence before it is presented in a court of law. To protect logs from these possible attacks, we need a secure logging mechanism. Researchers have already proposed several secure logging schemes [1, 2, 21, 32, 41] designed to defend against such attacks. However, ensuring the privacy and integrity of logs is costly, given that it requires special knowledge and skill on the developers' side. To implement a secure logging scheme, application developers need complete access to the logs. However, providing developers with full access to sensitive logs definitely increases the attack surface. This enables malicious developers to violate privacy, to acquire and sell sensitive business or personal information, and, most importantly, to keep a back door open for future attacks. Adding secure application audit trails can also be burdensome for developers.
It also increases the application development cost. On the other hand, system administrators, who have access to network logs or process logs, may not have sufficient knowledge to develop a secure logging scheme. In this paper, we propose Forensics Aware Language (FAL), a domain-specific language (DSL) [23] that assists system administrators and application developers in maintaining system logs and application audit trails securely, which is crucial for digital forensics investigations. A DSL is designed for a particular domain and has great advantages over general-purpose languages in that specific domain. A DSL provides higher productivity through its greater expressive power, ease of use, easier verification, and optimization [19, 23, 37]. Using our proposed DSL FAL, system admins can define a log structure and parse a log file according to that structure. They can also define the security parameters needed to preserve the integrity and confidentiality of logs. To accomplish this, they only need their domain knowledge of system logs. Using FAL, a software security analyst can define the required audit trail structure and can generate code in a general-purpose language (GPL), e.g., Java or C#, to store the audit logs securely. **Contribution.** The contribution of this work is two-fold:
- We propose the first domain-specific language, FAL, which can be used to ensure the security of system logs and application audit logs.
- We show all the DSL development processes, which can serve as a guideline for future DSL development.
This paper is an extension of [42]. Here, we augment the scheme presented in [42] by providing the complete translational semantics of FAL. We also make FAL more robust by providing a new feature. Previously, the delimiter used to parse a system log file was fixed.
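To make the idea of an admin-defined log structure concrete, the following Python sketch shows the kind of specification a DSL such as FAL aims at: a declarative structure (field names, a user-provided delimiter, and a marker for sensitive fields) driving a generic parser. The dictionary notation and all field names are our own illustrative assumptions, not FAL's actual syntax.

```python
# Hypothetical, declarative description of one log format; the
# "sensitive" entry marks fields a secure logging scheme would encrypt.
ACCESS_LOG = {
    "delimiter": "|",  # user-provided, mirroring FAL's delimiter feature
    "fields": ["timestamp", "user", "action", "resource"],
    "sensitive": ["user"],
}

def parse_entry(structure, raw_line):
    """Split one raw log line into named fields per the declared structure."""
    values = raw_line.strip().split(structure["delimiter"])
    if len(values) != len(structure["fields"]):
        raise ValueError("log line does not match the declared structure")
    return dict(zip(structure["fields"], values))
```

Supporting a different log format then only requires a new structure definition, not new parsing code.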
In the new version, we add a user-provided delimiter feature, modifying all the DSL development steps described in [42] (abstract syntax, syntactic domain, grammar, translational semantics, and implementation). We also describe the life cycle for DSL development that we followed during the development of FAL. The rest of the paper is organized as follows. Section 2 describes the background of secure logging and the motivation for developing a DSL to solve some of the challenges of secure logging. Section 3 discusses the life cycle for DSL development. In Section 4, we describe the development and implementation of FAL. Section 5 describes two practical applications of FAL in two different scenarios. Section 6 discusses related work on secure logging and the usage of DSLs in the security domain. Finally, we conclude in Section 7. 2. Background and Motivation In this section, we present the necessity of a secure logging scheme, common approaches to secure logging, and how a DSL can help mitigate some of the challenges of secure logging. 2.1. Secure Logging As logs are crucial for digital forensics investigations, attackers often target logs to destroy evidence. There can be two types of attacks on logs:
- **Integrity**: The integrity of logs can be violated in three ways: an attacker can remove log information, re-order the log entries, or add fake logs. A malicious user can launch these attacks to hide the traces of his illegal activities from a forensics investigation, or to frame an honest user. The timing of an incident is crucial for a forensics investigation; hence, re-ordering the log entries can be important for an attacker, as it can give him a chance to produce an alibi.
- **Confidentiality**: The activity of users, as well as sensitive private information about them, can be identified from various system and application logs. From the application logs of a business organization, we can also trace out very sensitive business information.
This information has high value to attackers; hence, an attack on the confidentiality of logs can be highly beneficial to them. The above attacks can come from different types of attackers:
- **External Attackers**: An external attacker can be a malicious user intending to attack users' privacy through the logs, or trying to modify logs to hide the traces of an attack (e.g., network intrusion, malware, spyware). A dishonest forensic investigator can also be an external attacker, as malicious investigators can alter the logs before presenting them to the court.
- **Internal Attackers**: A more crucial attack can come from insider attackers colluding with malicious users. A dishonest insider can be a system admin, a database admin, or an application developer. As system admins have access to all the system logs, they can always tamper with them. Application logs and some system logs can be stored in a database; in this case, threats can come from the database admin. A malicious database admin can modify logs without leaving any trace of the modification. Application developers can modify application logs, or can create a backdoor to collect them. Besides tampering with the logs, these insiders can also attack the privacy of users: they can collect and sell sensitive business and personal information derived from the logs.
To defend the confidentiality and integrity of logs, researchers have proposed several secure logging schemes [1, 21, 32, 41]. The commonalities among these secure logging schemes are: encrypting sensitive fields to protect confidentiality, and maintaining a hash-chain of the logs to protect their integrity. A hash-chain maintains the chronological information of the data. Hence, if any log entry is missing from the chain, or if the logs are reordered, the alteration can be detected from the hash-chain. The hash-chain value of one log entry is calculated using the hash of the previous entry; in this way, it preserves the chronological information.
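The hash-chain construction described above can be sketched in a few lines of Python. This is an illustrative sketch, not the implementation of any of the cited schemes; the choice of SHA-256 and the string concatenation format are our own assumptions.

```python
import hashlib

def chain_entry(prev_digest: str, log_line: str) -> str:
    """Digest of a log entry, bound to the digest of the previous entry.

    Because each digest depends on its predecessor, deleting, reordering,
    or modifying an entry breaks every digest that follows it.
    """
    return hashlib.sha256((prev_digest + log_line).encode()).hexdigest()

def build_chain(log_lines):
    """Return the list of chained digests for a sequence of log lines."""
    digests, prev = [], ""  # empty seed digest for the first entry
    for line in log_lines:
        prev = chain_entry(prev, line)
        digests.append(prev)
    return digests

def verify_chain(log_lines, digests) -> bool:
    """Recompute the chain and compare; tampering makes this return False."""
    return build_chain(log_lines) == digests
```

Reordering or altering any entry changes all subsequent digests, so verification fails; detecting truncation of the chain's tail additionally requires anchoring the final digest in trusted storage, as the cited schemes do in various ways.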
2.2. Motivation Though there are some proven secure logging schemes, developing and maintaining such a scheme is always challenging, for the following reasons:
1. One of the problems in developing a universal secure logging scheme is that logs exist in heterogeneous formats. Unfortunately, there is no standard format for logs; hence, two types of system logs can look completely different. Moreover, the same log can vary by operating system. For example, the format of a process log entry differs between MacOS and Debian.
2. To build a secure logging scheme, we need to give the logging scheme developers access to the logs. Developers' access to crucial log information certainly increases the attack surface. Previously, we only needed to trust system admins; adding developers to the loop introduces an extra level of trust. Developers might place a back door to collect plain log information and violate users' privacy.
3. For application logging, application developers need to add secure application logging code for every scenario. In most cases, we need to log the database operations: Add, Update, Delete. Through these logs, we can identify who has executed a specific operation on specific data. Writing code for all possible scenarios is burdensome for developers; on the other hand, skipping one important logging call may turn out to be crucial.
We believe that a well-defined DSL can resolve the above challenges. For system logs, with the help of a DSL, we can shift the responsibility of developing a secure logging scheme from programmers to system admins. As system admins already have the domain knowledge about system logs, they can easily define the required security parameters with the help of a DSL. In this way, we remove one level of attack surface. Since one of the main challenges of integrating a secure logging scheme is that logs come in heterogeneous formats, a DSL should also deal with this issue.
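The burden of hand-writing audit code for every Add/Update/Delete path, noted in point 3 above, is exactly what generated code can remove. As a hedged sketch of what such generated audit logic might look like (the function names, record fields, and the in-memory sink are invented for illustration; a real scheme would write to a secure, append-only log):

```python
from datetime import datetime, timezone

AUDIT_TRAIL = []  # stand-in for a secure, append-only log sink

def audited(operation):
    """Decorator that records who performed which operation on which record."""
    def wrap(func):
        def inner(user, record_id, *args, **kwargs):
            AUDIT_TRAIL.append({
                "when": datetime.now(timezone.utc).isoformat(),
                "who": user,
                "op": operation,
                "record": record_id,
            })
            return func(user, record_id, *args, **kwargs)
        return inner
    return wrap

@audited("DELETE")
def delete_record(user, record_id):
    # the actual database delete would happen here
    return True
```

Emitting such wrappers for every data-access function is mechanical, which is why delegating it to a code generator is attractive: no operation can be silently left unlogged.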
A secure logging scheme that is already integrated for one log format needs to be changed for another log format. Instead of using a GPL, if we use a DSL that can cope with heterogeneous log formats, the amount of code that has to be changed can be greatly reduced. Moreover, we do not need to re-implement a scheme when the log format changes because of a system migration. For application logs, a DSL can generate the required application logging code to reduce the application development cost. To integrate a secure logging scheme, knowledge about existing encryption and hashing algorithms should also be integrated into the DSL. For FAL, this knowledge is embedded in the specialized Application Programming Interface (API) (details in Section 4.5). However, if we want to use a proprietary encryption or hashing algorithm, we need to upgrade the DSL to provide the knowledge of that algorithm. For example, FAL supports the common hashing algorithms MD5, SHA-1, SHA-256, and SHA-512. To use the SHA-1024 hashing algorithm, we would need to upgrade FAL and the API to integrate it. Hence, our proposed DSL can only handle established encryption and hashing algorithms.

3. DSL Development Methodology

A DSL life cycle comprises the following phases: decision, domain analysis, DSL design, DSL implementation, DSL testing, DSL deployment, and DSL maintenance [5, 23]. During the decision phase, several criteria need to be evaluated and contrasted to find out whether the development of a new DSL is a solution to our problem. In this respect, decision patterns [23] might be helpful, as they indicate those situations of the past where the introduction of a DSL into a process had been successful. If the decision about implementing a DSL is positive during the initial phase, then the next stage is DSL development, which is the topic of this section.
It comprises the following phases: domain analysis, DSL design, and DSL implementation. These phases are crucial during a DSL life cycle, and an appropriate methodology is needed to carry them out correctly. Many DSLs have been developed from scratch by informally performing a particular phase (domain analysis, DSL design, DSL implementation), certain parts of a phase (e.g., the semantic part of DSL design), or even all the phases. There are several problems with the ‘from scratch’ approach. The more notable ones are that an unsatisfactory DSL is often developed and several costly re-development iterations are needed, that maintenance is difficult, and that DSL evolution is hard. For example, problems that should have been identified within early phases often become visible only during later phases. Hence, such an informal approach to DSL development is not recommended. In this section, a particular formal DSL development methodology is described. Namely, domain analysis, DSL design, and DSL implementation are not narrow processes, and various formalisms can be applied. The task of domain analysis is to select and define the domain of focus, collect appropriate domain information, and integrate it into a coherent domain model that represents the concepts within a domain and the relationships among them. Here, several existing domain analysis methodologies can be used. In particular, our recommendation is to use Feature-Oriented Domain Analysis (FODA) [15], since common and variable properties of a domain are easy to identify in feature diagrams (i.e., variation points). In fact, the list of variations indicates precisely which information is required for specifying an instance within a system. This information must either be directly specified within programs written in a DSL or be derivable from them. On the other hand, the commonalities are used for defining the execution model (through a set of common operations) and the primitives of the language.
The outputs from domain analysis are terminology, concepts, commonalities, and variations. These are easily identified from FODA feature diagrams [34] and should be used as inputs into the next phase: DSL design. Designing a language involves defining the constructs of the language (syntax) and giving semantics to the language. Both sub-phases, syntax and semantics, can be managed informally or formally. The advantages of formal syntax and semantic specification of programming languages are well known: the structure and meaning of a program are precisely and unambiguously defined, and it offers a unique possibility for the automatic generation of compilers or interpreters. Programming languages that have been designed using one of the various formal methods for syntax and semantic definition have cleaner syntax and semantics, fewer exceptions, and an easier learning curve. Moreover, researchers have recognized the possibility that many other language-based tools could be generated from formal language specifications. Therefore, many language implementation systems not only automatically generate a compiler/interpreter but also complete language-based environments including editors, type checkers, debuggers, various analyzers, and animators [13]. The following formal methods have been used for DSL syntax definition: BNF, FDL [14], metamodels, DTD, and XML Schema. The expressive power of all these formal methods for syntax definition is the same. Hence, transformations between different syntax descriptions are more or less easy to achieve. In our DSL development methodology, we opted for BNF, since many language implementation systems (i.e., compiler generators [6,8,12,24]) use variants of BNF. Semantic formalisms are usually based on abstract syntax instead of concrete syntax. Hence, both forms need to be developed, as the concrete syntax is later required for parsing.
Whilst different syntax formalisms are equivalent, the situation is quite different for semantics, where approaches such as attribute grammars, axiomatic semantics, operational semantics, denotational semantics, and translational semantics are complementary and used by different stakeholders. For example, attribute grammars are used by compiler writers, whilst axiomatic and denotational semantics are used by language designers to prove various language properties without concentrating on a particular implementation. On the other hand, operational semantics defines the meaning of the language through configuration changes and is closer to the implementation on virtual machines. Another distinction amongst different semantic formalisms is whether they are able to describe the static and/or dynamic semantics of a language. In our DSL development methodology, we used translational semantics for code generation.

Fig. 1: FAL Development Life Cycle

Finally, after a DSL has been designed, it is time for its implementation. Different approaches for DSL implementation have been introduced in [23], such as interpreter, compiler/application generator, embedding, preprocessing, extensible compiler/interpreter, Commercial Off-The-Shelf (COTS), and the hybrid approach. Clearly, we want to select an approach that requires the least effort during implementation and offers the greatest efficacy to the end-user [17]. In our approach to DSL development, the formal specifications produced during the design phase constitute an important part. Of course, it is harder to design a DSL formally than informally. This pays off during the DSL implementation phase, where a complete compiler/interpreter can be automatically generated. This is achieved in our case by mapping the translational semantics to the language implementation system LISA [24], which is based on attribute grammars [16, 27]. Code generation using translational semantics is easy to implement in attribute grammars.
The whole process of our methodology for DSL development is presented in Figure 1. Section 4 shows how our DSL development methodology has been used for the development of FAL.

4. The Domain-Specific-Language FAL

4.1. Domain Analysis

Fig. 2: The Feature Diagram of FAL

The very first step of designing a DSL is the detailed analysis and structuring of the application domain [38], which is provided by domain analysis. The output of domain analysis is a domain model, which gives us commonalities and variabilities, the semantics of concepts, and the dependencies between properties. Among the various schemes of domain analysis, we chose FODA. In FODA, the results of the domain analysis are captured in a feature model [33]. One of the most prominent ways of describing a feature model is the feature diagram (FD). The FD is represented as a tree with nodes as rectangles and arcs connecting the nodes. Nodes determine the features, while arcs determine the dependencies between the features. Nodes can be mandatory or optional, denoted by closed dots and open dots respectively. The FD of FAL is illustrated in Figure 2. From Figure 2, it is clear that a secure logging scheme consists of a log structure and a logging action. Every log structure must have fields. Every field must have a type. According to the chosen secure logging scheme, a field can be encrypted or not. Fields may have an index attribute, which can be used to specify the position of a field in an input. The type of a field can be IP, Text, Double, Integer, or Time. Time can be auto-generated, i.e., the current system time, or can be index-based. For an index-based field, the value will be extracted from the input file or argument list according to the position defined by the index. For encryption, various encryption algorithms, such as RSA [31] or AES [29], can be used. Some secure logging mechanisms use hashing and a hash-chain to ensure the integrity of logs.
Hence hashing algorithms, e.g., SHA-1\(^1\), SHA-256\(^1\), or MD5\(^2\), can be used. After defining a secure log structure, we need to use the structure for system or application logging. There can be two types of actions. First, for system logs, we need to parse the system log files according to a predefined structure and apply the security features while storing them. Second, for application logs, we need to generate GPL code. For system logs, we must have a file name, and we may have a public or private key file. By encrypting with a public key, we can ensure that only the private key owner can decrypt certain information. The private key is also needed to create a signature on certain data, and we can verify that signature using the public key. For application logging, we must have a table name, action, and method, and may have a public or private key file. Method is the name of a GPL method from which the action is called. An action can be adding a new record, or updating or deleting a record. For updates and deletions, we may want to save the history of previous records. FDs represent the common features, which always exist in a system (commonalities), and the optional features, which may or may not exist in a system (variabilities). Some of the commonalities identified from the FD of FAL are Fields, Type, etc., and some variabilities are Encryption Algorithm, Key, etc. From the FD, the variation points can be easily identified (optional, one-of, and more-of features). After the domain analysis, we have gathered the following information: terminology, concepts, and the common and variable properties of concepts with their interdependencies.

### 4.2. The Abstract Syntax

After the domain analysis, the next step is to design the DSL, which yields the syntax and semantics of the language. During the domain analysis using FODA, we identified several concepts in the application domain that needed to be mapped into DSL syntax and semantics.
From the FD, we can identify the relationship between concepts/features in an application domain and non-terminals in a context-free grammar (CFG). Table 1 represents the mapping between application domain concepts and non-terminals in context-free grammars, which appear on the left-hand side (LHS) and right-hand side (RHS) of CFG productions.

\(^1\) [http://www.itl.nist.gov/fipspubs/fip180-1.htm](http://www.itl.nist.gov/fipspubs/fip180-1.htm)

Table 1: Translation of the application domain concepts to a context-free grammar

<table>
  <thead>
    <tr>
      <th>Application domain concepts</th>
      <th>LHS non-terminal</th>
      <th>RHS structure</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>Secure Logging</td>
      <td>P</td>
      <td>Description of Log structure, and logging action.</td>
    </tr>
    <tr>
      <td>Log Structure</td>
      <td>LS</td>
      <td>Description of fields and security parameters.</td>
    </tr>
    <tr>
      <td>Fields</td>
      <td>F</td>
      <td>Field id, type (IP, Text, Double, Integer, Time), indexing feature, encrypted (or not encrypted).</td>
    </tr>
    <tr>
      <td>Index</td>
      <td>I</td>
      <td>Position of a field in input, or auto.</td>
    </tr>
    <tr>
      <td>Security Parameters</td>
      <td>S</td>
      <td>Description of encryption and hashing algorithm.</td>
    </tr>
    <tr>
      <td>Logging Action</td>
      <td>LA</td>
      <td>Description of system logging, or application logging statement.</td>
    </tr>
    <tr>
      <td>System logging</td>
      <td>SLA</td>
      <td>File name to be parsed to store securely, delimiter used in parsing, and the encryption key.</td>
    </tr>
    <tr>
      <td>Application log</td>
      <td>ALA</td>
      <td>Database operation, table id, GPL method name, encryption key, and history preservation option.</td>
    </tr>
    <tr>
      <td>System Log Encryption Key</td>
      <td>SLPK</td>
      <td>Public key or private key encryption file for encrypting system log.</td>
    </tr>
    <tr>
      <td>Application Log Encryption Key</td>
      <td>ALPK</td>
      <td>Public key or private key encryption file for encrypting application log.</td>
    </tr>
  </tbody>
</table>

Table 2: Abstract syntax of FAL

\[
\begin{array}{l}
P ::= LS\ LA \\
LS ::= lid\ F\ S \mid LS_1 ; LS_2 \\
F ::= type\ fid\ I\ encrypted \mid type\ fid\ I \mid F_1 ; F_2 \\
S ::= encAlg\ hashAlg \mid encAlg \mid hashAlg \mid \epsilon \\
I ::= n \mid Auto \\
LA ::= SLA \mid ALA \mid LA_1 ; LA_2 \\
SLA ::= slaid\ file\ SLPK \mid slaid\ file\ SLPK\ delimiter \\
ALA ::= alaid\ action\ tid\ mid\ withhistory\ ALPK \mid alaid\ action\ tid\ mid\ ALPK \\
SLPK ::= pubKey \mid privKey \mid \epsilon \\
ALPK ::= pubKey \mid privKey \mid \epsilon
\end{array}
\]

Based on Table 1, we define the abstract syntax of FAL, which is presented in Table 2. The syntactic domains of the variables are presented in Table 3. A FAL program consists of log structures LS and logging actions LA. A log structure LS defines a field description F and security parameters S. There can be one or more LS. The field descriptor F specifies the field type, id, index I, and encrypted status. There can be one or more fields in a log structure. The index I is either an integer number or auto. A field with auto as its index indicates that the value of the field is neither extracted from a certain position of a given log file (for system logs) nor bound to a position in the function parameters (for application logs).
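To make the index semantics concrete, the following sketch shows how an index-based field could be extracted from a delimited input line. The helper name and delimiter handling are assumptions for illustration, not FAL's actual parser:

```java
// Illustrative extraction of an index-based field from a delimited line.
// fieldAt is a hypothetical helper, not part of FAL's implementation.
public class IndexExtract {
    static String fieldAt(String line, String delimiter, int index) {
        // Quote the delimiter so characters like ';' or '|' are taken literally.
        return line.split(java.util.regex.Pattern.quote(delimiter))[index];
    }

    public static void main(String[] args) {
        // For a field declared with "Index 1", the second token is taken.
        System.out.println(fieldAt("11.1.0.5;alice;ok", ";", 1)); // prints alice
    }
}
```

A field declared with Auto bypasses this extraction entirely; its value (e.g., the current system time) is supplied by the generated code instead.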
### Table 3: Syntactic Domains

<table>
  <thead>
    <tr>
      <th>Domain</th>
      <th>Domain</th>
    </tr>
  </thead>
  <tbody>
    <tr>
      <td>$P \in Pgm$</td>
      <td>$LS \in \text{LogStructure}$</td>
    </tr>
    <tr>
      <td>$F \in \text{Field}$</td>
      <td>$LA \in \text{LogAction}$</td>
    </tr>
    <tr>
      <td>$I \in \text{Index}$</td>
      <td>$S \in \text{SecAttrs}$</td>
    </tr>
    <tr>
      <td>$\text{SLA} \in \text{SystemLog}$</td>
      <td>$\text{ALA} \in \text{AppLog}$</td>
    </tr>
    <tr>
      <td>$n \in \text{Num}$</td>
      <td>file $\in \text{FileSpec}$</td>
    </tr>
    <tr>
      <td>type $\in \{\text{IP}, \text{Text}, \text{Double}, \text{Integer}, \text{Time}\}$</td>
      <td>fid $\in \text{FieldIdentifier}$</td>
    </tr>
    <tr>
      <td>tid $\in \text{TableIdentifier}$</td>
      <td>mid $\in \text{MethodName}$</td>
    </tr>
    <tr>
      <td>lid $\in \text{LogStructureIdentifier}$</td>
      <td>action $\in \{\text{Add}, \text{Update}, \text{Delete}\}$</td>
    </tr>
    <tr>
      <td>hashAlg $\in \{\text{MD5}, \text{SHA-1}, \text{SHA-256}\}$</td>
      <td>encAlg $\in \{\text{RSA}, \text{AES}\}$</td>
    </tr>
    <tr>
      <td>$\text{SLPK} \in \text{SysLogEncryptionFile}$</td>
      <td>$\text{ALPK} \in \text{AppLogEncryptionFile}$</td>
    </tr>
    <tr>
      <td>slaid $\in \text{SystemLogActionIdentifier}$</td>
      <td>alaid $\in \text{AppLogActionIdentifier}$</td>
    </tr>
    <tr>
      <td>pubKey $\in \text{PublicKeyFileSpec}$</td>
      <td>privKey $\in \text{PrivateKeyFileSpec}$</td>
    </tr>
    <tr>
      <td>delimiter $\in \text{ASCII Character Sequence}$</td>
      <td></td>
    </tr>
  </tbody>
</table>

The value of such an auto field is generated by the intermediate code, e.g., the current system time. The security parameter $S$ defines the encryption and hashing algorithms. A logging action $LA$ can be either a system logging action $\text{SLA}$ or an application logging action $\text{ALA}$. There can be one or more logging actions. $\text{SLA}$ specifies the system log file name, the delimiter used in parsing, and the encryption key. $\text{ALA}$ specifies the database action name, database table name, GPL method name, encryption key, and history preservation option.
$\text{SLPK}$ and $\text{ALPK}$ specify the public/private key for system logging and application logging respectively.

### 4.3. The Concrete Syntax

After defining the abstract syntax, we experimented with different forms of concrete syntax to see how various constructs might look. For example, a log structure with two fields $\text{fromip}$ and $\text{user}$ can be defined using the concrete syntax described in Listing 1.

**Listing 1: FAL Log Structure**

```plaintext
1: Define netlog {
2: IP fromip Index 0 Encrypted;
3: TEXT user Index 1;
4: Use Encryption With RSA;
5: Use Logchain With SHA-1;
6: }
```

Here, the $\text{fromip}$ field has data type IP, and $\text{user}$ is of the TEXT data type. The $\text{Index}$ attribute represents the position of a field in the network log file. The $\text{Encrypted}$ attribute states that the field will be encrypted according to the encryption algorithm defined in line 4. If there are multiple encrypted fields, all of them will be encrypted using the same encryption algorithm. Line 5 adds the flexibility of choosing any hash function. After defining a log structure, we define a logging action, which uses the pre-defined log structure. A concrete example of storing a network log file securely is given in Listing 2:

**Listing 2: FAL Logging Action**

```plaintext
1: Watchfile network.log Using netlog
2: {
3: Privatekey private.key;
4: Delimiter ";"
5: }
```

The *Watchfile* statement uses the previously defined ‘netlog’ structure to parse the ‘network.log’ file, and uses private.key, a private key file, to encrypt the *fromip* field defined in Listing 1.
**Listing 3: FAL Program for System and Application Log**

```plaintext
1: SampleProgram[
2: Define netlog {
3: IP fromip Index 0 Encrypted;
4: TEXT user Index 1;
5: Use Encryption With RSA;
6: Use Logchain With SHA_1;
7: }
8: Define patientlog {
9: TIME logtime Auto;
10: TEXT user Index 0 Encrypted;
11: INT refid Index 1;
12: TEXT message Index 2 Encrypted;
13: Use Logchain With SHA_256;
14: }
15: Watchfile network.log Using netlog {
16: Privatekey private.key;
17: Delimiter ";"
18: }
19: Watchtable Patient Using patientlog {
20: Action Edit Withhistory;
21: Method updatepatient;
22: Publickey public.key;
23: }
24: ]
```

When a language designer is satisfied with the look and feel of the language’s syntax, and possible additional constraints from domain experts or language end-users are fulfilled, the concrete syntax can be finalized. In Listing 3, a complete example of a FAL program for secure system and application logging is given. We finalized the concrete syntax on the basis of several example programs. Finalizing the concrete syntax can proceed in parallel with defining the language semantics. In Table 4, we provide the concrete syntax of FAL.
Table 4: The concrete syntax of FAL

```plaintext
Program        := #CCStart [ LOG_STRUCTS LOG_ACTION ]
LOG_STRUCTS    := LG_STRUCTS
LG_STRUCTS     := LG_STRUCTS LG_STRUCT | LG_STRUCT
LG_STRUCT      := Define #Id { DEF }
DEF            := FIELDS SEC_ATTRS
FIELDS         := FIELDS FIELD | FIELD
FIELD          := #Type #Id IND_BASE ENC ;
IND_BASE       := Index #Number | Auto
ENC            := Encrypted | ε
SEC_ATTRS      := SEC_ATTRS SEC_ATTRIB | ε
SEC_ATTRIB     := Use SEC_STMT ;
SEC_STMT       := ENC_STMT | HASH_STMT
ENC_STMT       := Encryption With #EncAlgorithm
HASH_STMT      := Logchain With #HashAlgorithm
LOG_ACTION     := LG_ACTIONS
LG_ACTIONS     := LG_ACTIONS LG_ACTION | LG_ACTION
LG_ACTION      := SYS_ACT | APP_ACT
SYS_ACT        := Watchfile #FileName Using #Id { ENC_KEY DELIM }
ENC_KEY        := PUB_KEY | PRIV_KEY | ε
PUB_KEY        := Publickey #FileName ;
PRIV_KEY       := Privatekey #FileName ;
DELIM          := Delimiter #UserDelimiter ; | ε
APP_ACT        := Watchtable #CCStart Using #Id { PARAM }
PARAM          := DB_ACTION GPL_MTHD ENC_KEY
DB_ACTION      := Action ACT_NAME ;
ACT_NAME       := Add | ACT_HSTRY
ACT_HSTRY      := ACT_HSTRY_NAME HISTORY_STMT
ACT_HSTRY_NAME := Edit | Delete
HISTORY_STMT   := Withhistory | ε
GPL_MTHD       := Method #Id ;
```

4.4. Translational Semantics

The advantages of using formal descriptions for the semantics of a DSL (e.g., attribute grammars, denotational semantics, operational semantics) have been previously discussed in [23]. The authors of [23] discussed the ability to find problems in the semantics before a DSL is actually implemented. In this work, we used translational semantics, which is simpler to define than denotational and operational semantics and is often used for defining the semantics of domain-specific modeling languages [4]. Listing 4 provides the complete translational semantics of FAL.

Listing 4: Translational Semantics

```plaintext
 1: TP : Pgm → Code
 2: TP⟦LS LA⟧ = (TLS⟦LS⟧)↓1 + TLA⟦LA⟧ ((TLS⟦LS⟧)↓2)
 3: TLS : LogStructure → Code × lid
 4: TLS⟦lid F S⟧ = ("LogStructure " + lid + " = new LogStructure();" +
                    lid + ".setName(\"" + lid + "\");" + TF⟦F⟧ lid + TS⟦S⟧ lid, lid)
 5: TLS⟦LS1 ; LS2⟧ = ((TLS⟦LS1⟧)↓1 + (TLS⟦LS2⟧)↓1, (TLS⟦LS1⟧)↓2)
 6: TF : Field → lid → Code
 7: TF⟦type fid I⟧ lid = lid + ".addField(FieldType." + type + ",\"" + fid + "\"," + TI⟦I⟧ + ",false);"
 8: TF⟦type fid I encrypted⟧ lid = lid + ".addField(FieldType." + type + ",\"" + fid + "\"," + TI⟦I⟧ + ",true);"
 9: TF⟦F1 ; F2⟧ lid = TF⟦F1⟧ lid + TF⟦F2⟧ lid
10: TI : Index → Code
11: TI⟦n⟧ = "true," + n
12: TI⟦Auto⟧ = "false,Integer.MAX_VALUE"
13: TS : SecAttrs → lid → Code
14: TS⟦encAlg hashAlg⟧ lid = lid + ".setEncryptionAlgorithm(\"" + encAlg + "\");" +
                             lid + ".setHashingAlgorithm(\"" + hashAlg + "\");"
15: TS⟦encAlg⟧ lid = lid + ".setEncryptionAlgorithm(\"" + encAlg + "\");"
16: TS⟦hashAlg⟧ lid = lid + ".setHashingAlgorithm(\"" + hashAlg + "\");"
17: TS⟦ε⟧ lid = ""
18: TLA : LogAction → lid → Code
19: TLA⟦alaid action tid mid withhistory ALPK⟧ lid = "TableWatcher " + alaid + " = new TableWatcher();" +
        alaid + ".setLogStructure(" + lid + ");" + alaid + ".setAction(\"" + action + "\");" +
        alaid + ".setTable(\"" + tid + "\");" + alaid + ".setMethod(\"" + mid + "\");" +
        alaid + ".setMaintainHistory(true);" + TALPK⟦ALPK⟧ alaid
20: TLA⟦alaid action tid mid ALPK⟧ lid = "TableWatcher " + alaid + " = new TableWatcher();" +
        alaid + ".setLogStructure(" + lid + ");" + alaid + ".setAction(\"" + action + "\");" +
        alaid + ".setTable(\"" + tid + "\");" + alaid + ".setMethod(\"" + mid + "\");" +
        alaid + ".setMaintainHistory(false);" + TALPK⟦ALPK⟧ alaid
21: TALPK : AppLogEncryptionFile → alaid → Code
22: TALPK⟦pubKey⟧ alaid = alaid + ".setPublicKeyFile(\"" + pubKey + "\");"
23: TALPK⟦privKey⟧ alaid = alaid + ".setPrivateKeyFile(\"" + privKey + "\");"
```

For each non-terminal in the CFG (Table 2), a translational function is defined, which maps syntactic domains (Table 3) to their meanings: Java code that uses a specialized API for secure logging.
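As a rough illustration of this translational style, the first two statements generated for a log structure (the object declaration and the setName call, cf. Listing 9) can be produced by plain string concatenation; the helper name genLogStructure is hypothetical and not part of FAL's compiler:

```java
// Hedged sketch of translational semantics as string concatenation:
// emits the opening statements of a translated log structure.
// genLogStructure is a hypothetical helper for illustration only.
public class GenSketch {
    static String genLogStructure(String lid) {
        return "LogStructure " + lid + " = new LogStructure();\n"
             + lid + ".setName(\"" + lid + "\");\n";
    }

    public static void main(String[] args) {
        // Produces the same shape as the first two lines of Listing 9.
        System.out.print(genLogStructure("snortlog"));
    }
}
```

The remaining statements of a translated log structure would be appended in the same way, by concatenating the output of the field and security-attribute translations.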
For example, the meaning of the non-terminal \textit{LS} is defined by the translational function \textit{TLS}, which takes a \textit{LogStructure} as input and returns two components: the first is \textit{code} and the second is \textit{lid} (the object id of the \textit{LogStructure} class). Two different forms of \textit{LS} exist (see the abstract syntax in Table 2); hence, two translational functions \textit{TLS} are defined (lines 4 and 5 in Listing 4). The first translational function \textit{TLS} (line 4 in Listing 4) maps the syntactic structure \textit{lid F S} into several Java statements: the declaration of a new object as an instance of class \textit{LogStructure}, setting a name on the newly created object by calling the \textit{setName} method, and additional Java statements. The additional statements are generated by applying the translational functions \textit{TF} and \textit{TS} to the non-terminals \textit{F} and \textit{S}, where \textit{F} and \textit{S} represent the fields and security attributes respectively. This function also returns the \textit{lid} as the second component. The second translational function \textit{TLS} (line 5 in Listing 4) defines the meaning of a sequence of log structures (\textit{LS}1; \textit{LS}2): the generated code for \textit{LS}1 is simply concatenated with the generated code for \textit{LS}2. In a similar manner, the other translational functions are defined.

4.5. Implementation

Various techniques to implement a DSL exist, such as preprocessing, embedding, compiler/interpreter, compiler generator, extensible compiler/interpreter, commercial off-the-shelf, and hybrid approaches [23]. Kosar et al. [17] suggested focusing on end-user usability when implementing a DSL. One implementation approach can be good in terms of the effort needed to implement a DSL, yet the same approach may not be suitable for end-users, who may need extra effort to rapidly write correct programs in that DSL.
If only the DSL implementation effort is taken into consideration, then the most efficient implementation technique is embedding. However, the embedding approach might carry significant penalties when end-user effort is taken into account (e.g., DSL program size, closeness to the original notation, debugging, and error reporting). To minimize end-users’ effort, building a DSL compiler [17] is most often a good solution, but this process costs the most from an implementation point of view. The implementation effort can nevertheless be greatly reduced, although not as much as with embedding, if compiler generators (e.g., LISA [25], ANTLR [28], Silver [39]) are used. To implement FAL, we rely on a source-to-source transformation technique. To transform a FAL program into an intermediate Java program, we built a FAL compiler using LISA, which has proven its usefulness in many other DSL projects [10,11,13,20,22]. The intermediate program uses a pre-built Java API. The design of the Java API is illustrated in Figure 3. Fields are represented by the \textit{Field} class. The \textit{LogStructure} has a list of \textit{Field} objects and the security attributes. The name field of \textit{LogStructure} is used to map to the database table name. \textit{LogAction} is an abstract class with the abstract method \textit{execute}, and it also has an instance of \textit{LogStructure}. \textit{FileWatcher} extends the \textit{LogAction} class and implements the execute method, which is responsible for parsing a log file and storing it into the database with the help of \textit{LogStructure} and \textit{Field}. \textit{TableWatcher} also extends the \textit{LogAction} class and implements the execute method, which generates application logging code for the developer. The \textit{SecurityUtil} class defines all the required encryption and hashing methods. After finalizing the Java API, we know what the intermediate program will look like.
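The class hierarchy described above can be outlined as follows. Method bodies are stubbed, so this is a sketch of the design in Figure 3 rather than FAL's actual implementation:

```java
import java.util.ArrayList;
import java.util.List;

// Skeleton of the Java API described in the text (Figure 3).
// Bodies are stubs; this sketches the design, not the real library.
enum FieldType { IP, TEXT, DOUBLE, INTEGER, TIME }

class Field {
    FieldType type;
    String name;
    boolean indexBased;
    int index;
    boolean encrypted;
}

class LogStructure {
    private String name;                 // maps to the database table name
    private final List<Field> fields = new ArrayList<>();
    private String encryptionAlgorithm;
    private String hashingAlgorithm;
    void setName(String n) { name = n; }
    String getName() { return name; }
    void addField(Field f) { fields.add(f); }
    void setEncryptionAlgorithm(String a) { encryptionAlgorithm = a; }
    void setHashingAlgorithm(String a) { hashingAlgorithm = a; }
}

abstract class LogAction {
    protected LogStructure logStructure; // every action works on a log structure
    void setLogStructure(LogStructure ls) { logStructure = ls; }
    abstract void execute();
}

class FileWatcher extends LogAction {
    @Override void execute() { /* parse a log file and store it securely */ }
}

class TableWatcher extends LogAction {
    @Override void execute() { /* generate application logging code */ }
}

public class ApiSketch {
    public static void main(String[] args) {
        LogStructure ls = new LogStructure();
        ls.setName("netlog");
        ls.setEncryptionAlgorithm("RSA");
        ls.setHashingAlgorithm("SHA-256");
        LogAction action = new FileWatcher();
        action.setLogStructure(ls);
        action.execute();
        System.out.println(ls.getName());
    }
}
```

The abstract execute method is the single extension point: FileWatcher fills it with parsing and secure storage, while TableWatcher fills it with code generation, which is why both actions can be driven uniformly by the generated intermediate program.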
For example, the API provides the `addField(Enum FieldType, String fieldName, boolean isIndexBased, int index, boolean isEncrypted)` method to add a new field. For using a specific encryption and hashing algorithm, the intermediate program can use the `setEncryptionAlgorithm(String algoName)` and `setHashingAlgorithm(String algoName)` methods provided by the API. The FAL compiler generates this intermediate program from a FAL program. To transform the FAL program into a Java program correctly, we use the attribute grammar-based approach, as LISA specifications are based on attribute grammars [16, 27]. LISA is capable of generating a compiler from formal attribute grammar-based language specifications. The first task in implementing the compiler is to define the lexicon. Defining the lexicon in LISA is straightforward; it is shown in Listing 5.

**Listing 5: Lexical specification for FAL in LISA**

```
1: lexicon {
2: Number [0-9]+
3: Id [a-z][a-z0-9._]*
4: Type IP | TEXT | INT | TIME | DOUBLE
5: EncAlgorithm RSA | AES
6: HashAlgorithm MD5 | SHA_1 | SHA_256
7: keywords Define | Use | Encryption | With | Logchain | Index |
8: Auto | Encrypted | Watchfile | Using | Publickey | Privatekey |
9: Watchtable | Action | Withhistory | Method | Parameter
10: FileName [a-z][a-z0-9._]*,[a-z]*
11: UserDelim "\[0x20 - \0x7E]+"
12: CCStart [A-Z][a-z0-9._]*
13: ActionName Add | Edit | Delete
14: Separator | ; | { | } | , | [ | ]
15: ignore \[0x09\0x0A\0x0D\]+
16: }
```

To write the attribute-based semantic rules, we first need to identify the attributes required for proper semantic analysis. Listing 6 presents the attributes that we used. *code* is the main synthesized attribute that produces the targeted GPL program. *ivar* is an inherited attribute that is used to propagate the variable name down the parse tree. *envs* is a synthesized attribute and *envi* is an inherited attribute; both are needed to maintain a HashSet of already defined variables.
*errorMsg* is a synthesized attribute, required to report FAL error messages to users. *ok* is a synthesized attribute that indicates whether a FAL program is correct or not. Finally, the *PROGRAM.file* attribute is used to write the generated GPL program to a file.

**Listing 6: Attributes for FAL in LISA**

1: attributes String *.code;
2: String *.ivar;
3: String *.errorMsg;
4: HashSet *.envs;
5: HashSet *.envi;
6: boolean *.ok;
7: BufferedWriter PROGRAM.file;

An implementation of the translational semantics (Listing 4) using LISA is a straightforward task. The implementation of the translational function $TF$ (lines 7 and 8 in Listing 4) is presented in Listing 7. Note how close the two notations are. After compiling a FAL program, the required Java code is automatically generated. The generated code utilizes predefined APIs to store logs, and generates audit-trail code for ensuring the integrity and confidentiality of the logs.

Listing 7: Semantic Rules in LISA

1: rule field {
2:   FIELD ::= #Type #Id IND_BASE ENC \; compute {
3:     FIELD.code = FIELD.ivar + ".addField(FieldType." +
4:       #Type.value() + ",\"" + #Id.value() + "\"," +
5:       IND_BASE.code + "," + ENC.code + ");";
6:   };
7: }
8: rule ind_base {
9:   IND_BASE ::= Index #Number compute {
10:     IND_BASE.code = "true," + #Number.value();
11:   }
12:   | Auto compute {
13:     IND_BASE.code = "false,Integer.MAX_VALUE";
14:   };
15: }
16: rule enc {
17:   ENC ::= Encrypted compute {
18:     ENC.code = "true";
19:   }
20:   | epsilon compute {
21:     ENC.code = "false";
22:   };
23: }

5. Practical Experience

The goal of this section is to acquaint the reader with the practical experience obtained by using FAL. We have therefore selected two case studies of FAL applications:

- Preserving Snort logs securely using FAL.
- Generating application logging code for a patient information update method in Java.

5.1. Preserve Snort Log

Snort\(^3\) is a free, lightweight network intrusion detection system.
The network logs generated by Snort play a vital role in network forensics. Hence, preserving the confidentiality and integrity of Snort logs is crucial from a digital forensics perspective. Here is a sample Snort log:

```
11/19-13:43:43.222391 11.1.0.5:51215 -> 74.125.130.160
TCP TTL:64 TOS:0x0 ID:22101 IpLen:20 DgmLen:40 DF
***A***F Seq: 0x3EA405D9 Ack: 0x89DE7D Win: 0x7210 TcpLen: 20
```

This log tells us that the machine with IP 11.1.0.5 performed an HTTP request to the machine 74.125.130.160 at time 11/19-13:43:43.222391. Hence, when a machine attacks another machine, we can identify the attacking machine’s IP from the Snort log. Let us assume that a system admin decides to store the ‘from IP’, ‘to IP’, and time of a network request securely. To protect the confidentiality of the logs, among these three fields, the admin decides to encrypt ‘from IP’ and ‘to IP’ with the public key of the law enforcement agencies using the RSA algorithm. To protect the integrity of the logs, the system maintains a hash-chain of the logs using the SHA-256 hash function. The FAL program described in Listing 8 ensures all these properties.

\(^3\) http://www.snort.org

**Listing 8: FAL Program for Snort Log**

```plaintext
1: SnortParser [
2: Define snortlog {
3: IP fromip Index 1 Encrypted;
4: IP toip Index 3 Encrypted;
5: TIME logtime Index 0;
6: Use Encryption With RSA;
7: Use Logchain With SHA-256;
8: }
9: Watchfile snortnetwork.log Using snortlog {
10: Publickey lawpublic.key;
11: }
12: ]
```

The above FAL program will generate the Java code provided in Listing 9.
**Listing 9: Translated Java Code from FAL**

```java
1: LogStructure snortlog = new LogStructure();
2: snortlog.setName("snortlog");
3: snortlog.addField(FieldType.IP,"fromip",true,1,true);
4: snortlog.addField(FieldType.IP,"toip",true,3,true);
5: snortlog.addField(FieldType.TIME,"logtime",true,0,false);
6: snortlog.setEncryptionAlgorithm("RSA");
7: snortlog.setHashingAlgorithm("SHA-256");
8: FileWatcher snortlogFileWatcher = new FileWatcher();
9: snortlogFileWatcher.setLogStructure(snortlog);
10: snortlogFileWatcher.setFileName("snortnetwork.log");
11: snortlogFileWatcher.setPublicKeyFile("lawpublic.key");
12: snortlogFileWatcher.execute();
```

Executing the Java code (Listing 9) will parse the Snort log file and store the entries with the specified security parameters. However, FAL users do not need to understand the underlying API or the intermediate Java code generated by FAL.

### 5.2. Application Logging

Application logs are crucial for many applications, including those in the business and health care sectors. The methods that directly communicate with a database need to be logged. From these logs, we can later identify the person who has modified (added/updated/deleted) any record. Application developers need to integrate this logging feature into every method that updates the database. FAL can generate the necessary logging code for application developers.

**Listing 10: FAL Program for Application Logging**

```plaintext
1: PatientAppLog {
2:   Define useraudit {
3:     TIME logtime Auto;
4:     TEXT username Index 0 Encrypted;
5:     INT refid Index 1;
6:     TEXT message Index 2 Encrypted;
7:     Use Encryption With AES;
8:     Use Logchain With SHA-1;
9:   };
10:   Watchtable Patient Using useraudit {
11:     Action Edit Withhistory;
12:     Method updatepatient;
13:     Privatekey serveraes.key;
14:   }
15: }
```

We present a hypothetical scenario of a health care application, where we can use FAL for secure application logging.
In the application, there is a Patient table and we want to store a log entry whenever an update is performed on a patient's record. For such an application, a log entry should include the name of the user who executed the operation, the id of the patient being updated, a description of the operation, and the time of the operation. The security analyst of the application decides to encrypt the user name and the operation description using the AES encryption algorithm, and to maintain the hash-chain of logs using the SHA-1 hash function. The FAL program described in Listing 10 can be used to generate the necessary application logging code. The Java code translated from the FAL program (Listing 10) will generate the application logging method described in Listing 11.

### 6. Related Work

As logging information is one of the prime needs in forensic investigation, several researchers have explored this problem across multiple dimensions. There have been a number of cryptographic approaches to address security for audit logs that are generated and stored on local logging servers [2, 3, 32]. Bellare et al. provided a solution for secure logging where, even if the encryption/decryption key of a logging server has been compromised, the attacker cannot read or modify the previously encrypted logs [2, 3]. Schneier et al. proposed a secure audit logging scheme, where the log information is stored on an untrusted machine [32]. The proposed cryptographic scheme ensures that after an attack, the attacker can acquire little or no information and cannot alter the sensitive log information without being detected. In their scheme, they used public-key and private-key based encryption, message authentication codes, and hashing. According to Schneier's scheme, a logging machine U opening a new audit log first establishes a shared secret key $A_0$ with a trusted remote server T. After each audit entry is generated, the current secret key $A_i$ is evolved into $A_{i+1}$ through a one-way function. Log entries are linked using a hash chain.
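The key evolution and hash chaining described above can be illustrated with a small sketch. This is our own toy illustration, not the paper's actual scheme or FAL's API: SHA-256 stands in for the one-way function, and the class and method names are hypothetical.

```java
import java.nio.charset.StandardCharsets;
import java.security.MessageDigest;

// Toy sketch of forward-secure logging: the secret key A_i is evolved through a
// one-way function (here SHA-256) after every entry, and entries are linked in
// a hash chain so that undetected tampering with earlier entries is infeasible.
class ForwardSecureLog {
    private byte[] key;        // current secret A_i, initially A_0 shared with T
    private byte[] chainHead;  // hash of the latest chain element

    ForwardSecureLog(byte[] a0) {
        this.key = a0.clone();
        this.chainHead = new byte[32]; // all-zero sentinel marking the chain start
    }

    private static byte[] sha256(byte[]... parts) {
        try {
            MessageDigest md = MessageDigest.getInstance("SHA-256");
            for (byte[] p : parts) md.update(p);
            return md.digest();
        } catch (Exception e) { throw new IllegalStateException(e); }
    }

    /** Appends an entry, links it to its predecessor, and evolves the key. */
    byte[] append(String entry) {
        byte[] data = entry.getBytes(StandardCharsets.UTF_8);
        chainHead = sha256(chainHead, key, data); // hash-chain link
        key = sha256(key);                        // one-way evolution A_i -> A_{i+1}
        return chainHead;
    }

    byte[] chainHead() { return chainHead.clone(); }
}
```

Because the key is evolved through a one-way function, an attacker who compromises the machine after entry *i* cannot recover earlier keys and therefore cannot forge valid chain links for entries written before the compromise.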
Secure logging in a cloud computing environment, where users can run virtual machines (VMs) on cloud infrastructure, requires special attention due to the inherent nature of clouds. Zawoad et al. proposed a secure logging scheme, SecLaaS, for the cloud computing environment [41]. While proposing SecLaaS, they considered the cloud service provider to be dishonest, able to collude with an attacker to tamper with the original logs. Alteration of the original logs can hide the trace of the attacker's malicious behavior and impede the forensic investigation process. They used public/private key-based encryption and a hash-chain scheme to ensure the privacy and integrity of cloud VM logs. The schemes stated earlier protect against post-compromise insertion, alteration, deletion, and reordering of pre-compromise log entries.

Though there are no DSLs for secure logging, there are some DSLs for providing access control facilities for audit logs or provenance records, as well as for general-purpose access control. Ni et al. provided an XML-based access control language for a general provenance model [26]. The language supports the specification of both actor preferences and organizational access control policies. Using this language, users can define and evaluate access control policies on application audit logs. It also supports applying policies to a particular record and its fields. However, in this paper, the authors did not describe the language development process. Ribeiro et al. provided SPL, an access control language for security policies with complex constraints [30]. SPL supports multiple simultaneous complex policies by resolving conflicts between two active policies. Beyond permission/prohibition, they also showed how to express and implement the obligation concept. This paper also did not provide the details of the language development process.
Weissmann proposed ACS (Access Control Sets), an access control language to solve the access control problem of UMLsec⁴ and aspect-oriented programming [40]. The proposed language particularly tries to solve the problems of undecidability in granting or denying a privilege, the incapability of changing access controls without changing the model, the incapability of delegating access control specifications, and the inflexibility of UML in defining relations other than logical ones. Domain analysis of the language was performed informally, and the author provided a BNF grammar for the language. The language is based on the mathematical concept of sets, hence the semantics of ACS closely follows that of set theory. This language can be used in a business application to define all the policies of the business organization, which can make both writing and modifying access control specifications easy by reducing human interaction with the security code.

### 7. Conclusion and Future Work

For a proper digital forensics investigation, maintaining the trustworthiness of logs is compulsory, and for this, we need a proper secure logging mechanism. To address the problem of secure logging, we have designed and implemented the domain-specific language FAL with the following benefits:

- The responsibility of developing secure logging schemes is shifted from application programmers to security experts, which in turn increases trustworthiness.
- The code required to use a specialized API for secure application logging is automatically generated. Hence, the effort and cost of developing a secure logging scheme are reduced.
- Heterogeneous formats of logs with any secure logging scheme can be easily handled.
- A detailed understanding of the specialized API for secure logging is not needed for FAL users.

One important feature that we are planning to incorporate into FAL is a timing option for the system logging action.
With this feature, users can define when they want to start system logging and for how long they want to run it. Currently, FAL does not have a user-friendly error reporting feature, which we will integrate in the future. For example, if a FAL user uses the same index value for two fields, or uses an encryption algorithm that is not available with FAL, these problems should be detected at compile time and appropriate messages shown to the user. For now, FAL generates audit-trailing code for Java. We will also work towards making FAL more robust so that it can generate audit-trailing code for other popular GPLs such as C++, C#, Python, and Ruby. To accomplish this goal, we need to implement the current Java API for these other GPLs. Finally, FAL's design needs to be validated by end-users by performing usability studies and controlled experiments [18].

### References

1. Accorsi, R.: On the relationship of privacy and secure remote logging in dynamic systems. In: Security and Privacy in Dynamic Environments. Springer (2006), http://dx.doi.org/10.1007/0-387-33406-8_28
2. Bellare, M., Yee, B.: Forward integrity for secure audit logs. Technical report, Computer Science and Engineering Department, University of California at San Diego (1997)
3. Bellare, M., Yee, B.: Forward-security in private-key cryptography. In: Topics in Cryptology, CT-RSA (2003)
4. http://www4.in.tum.de/~umlsec/

Shams Zawoad is working as a graduate research assistant in the SECuRE and Trustworthy Computing Lab (SECRETLab) and is a Ph.D. student at the University of Alabama at Birmingham (UAB). His research interests are in cloud forensics, secure cloud provenance, cybercrime, and mobile malware. He received his B.Sc. in Computer Science and Engineering from Bangladesh University of Engineering and Technology in 2008. Before joining UAB, Zawoad had been working in the software industry and developed authentication and authorization frameworks for several critical business applications, including an online payment system of the Bangladesh Post Office.

Dr. Marjan Mernik received his M.Sc. and Ph.D.
degrees in computer science from the University of Maribor in 1994 and 1998, respectively. He is currently a professor at the University of Maribor.

Dr. Ragib Hasan is a tenure-track Assistant Professor at the Department of Computer and Information Sciences at the University of Alabama at Birmingham. With a key focus on practical computer security problems, Hasan explores research on cloud security, mobile malware security, secure provenance, biomedical device security, social network security, and database security. Hasan is the founder of the SECuRE and Trustworthy Computing Lab (SECRETLab) at UAB. He is also a member of the UAB Center for Information Assurance and Joint Forensics Research. Prior to joining the University of Alabama at Birmingham in 2011, Hasan was an NSF/CRA Computing Innovation Fellow and Assistant Research Scientist at the Department of Computer Science, Johns Hopkins University. He received his Ph.D. and M.S. in Computer Science from the University of Illinois at Urbana-Champaign in October 2009 and December 2005, respectively. Before that, he received a B.Sc. in Computer Science and Engineering, graduating summa cum laude, from Bangladesh University of Engineering and Technology (BUET) in 2003. He also served on the faculty of the Department of Computer Science and Engineering at BUET. Dr. Hasan's research is supported by the Department of Homeland Security, the Office of Naval Research, the National Science Foundation, Facebook Inc., Google Inc., and Amazon Inc. He is a 2014 awardee of the prestigious NSF CAREER Award for his work on cloud security. Dr. Hasan is also a recipient of the 2013 Google RISE Award, a 2013 Information Society Innovation Fund Award, a 2013 Deutsche Welle Best of Blogs and Online Innovation award, a 2011 Google Faculty Research Award, the 2009 NSF Computing Innovation Fellowship, and the 2003 Chancellor Award and Gold Medal from Bangladesh University of Engineering and Technology.
He is a founding member of the Wikimedia Bangladesh chapter, a long-term administrator of the Bangla and English Wikipedias, and also the founder of Shikkhon.com, an award-winning online education platform for advancing STEM education in rural areas of India and Bangladesh, which has won the 2013 Google RISE Award and the 2013 Information Society Innovation Fund Award. His BanglaBraille project has won the 2014 The Bobs award in the best innovation category.

Received: December 1, 2013; Accepted: April 9, 2014.
Clonos: Consistent Causal Recovery for Highly-Available Streaming Dataflows

Pedro F. Silvestre, Marios Fragkoulis, Diomidis Spinellis, Asterios Katsifodimos
Delft University of Technology
{P.F.Silvestre,M.Fragkoulis,D.Spinellis,A.Katsifodimos}@tudelft.nl

ABSTRACT

Stream processing lies in the backbone of modern businesses, being employed for mission-critical applications such as real-time fraud detection, car-trip fare calculations, traffic management, and stock trading. Large-scale applications are executed by scale-out stream processing systems on thousands of long-lived operators, which are subject to failures. Recovering from failures fast and consistently are both top priorities, yet they are only partly satisfied by existing fault tolerance methods due to the strong assumptions these make. In particular, prior solutions fail to address consistency in the presence of nondeterminism, such as calls to external services, asynchronous timers, and processing-time windows. This paper describes Clonos, a fault tolerance approach that achieves fast, local operator recovery with exactly-once guarantees and high availability by instantly switching to passive standby operators. Clonos enforces causally consistent recovery, including output deduplication, by tracking nondeterminism within the system through causal logging. To implement Clonos we re-engineered many of the internal subsystems of a state-of-the-art stream processor. We evaluate Clonos' overhead and recovery on the Nexmark benchmark against Apache Flink. Clonos achieves instant recovery with negligible overhead and, unlike previous work, does not make assumptions about the deterministic nature of operators.

1 INTRODUCTION

Stream processing systems have reached a high level of maturity in the last ten years, rendering them production-grade systems.
Apache Flink [12], Apache Kafka [45], Samza [37], Jet [25], and other systems are serving important applications such as fraud detection in transactions, car-trip pricing, demand forecasting, stock trading, and even real-time traffic control. Making large scale-out deployments fault-tolerant is the key factor that has enabled modern stream processing systems to be used in production settings. Streaming applications require reliable, highly available, and high-performance systems that perform consistent processing. Consistency in the modern streaming systems nomenclature is referred to as exactly-once processing, which means that an incoming record will apply its effects to the computation state of the system exactly once, even in the event of failures.

State-of-the-art stream processing systems can provide exactly-once processing and high availability under failures, but by design they have grown to support specific types of workloads summarized as analytics functions, for instance aggregates and joins. These computations, which have been associated with streaming systems since their early days, are mostly deterministic and operate solely within system boundaries. In contrast, emerging classes of applications, such as general event-driven Cloud applications [11, 31] and Stateful Functions [27], involve custom nondeterministic business logic and frequent interactions with external systems and databases. Because of their event-based nature and performance requirements, such applications are increasingly executed as dataflows on stream processors. To support these applications effectively, dataflow systems need to embrace nondeterminism in their fault tolerance and high availability approaches.
Existing fault tolerance and high-availability approaches [8, 9, 18, 29, 39] fail to provide exactly-once processing guarantees in the presence of nondeterministic computations, mainly because they make very strong assumptions that are not satisfied by modern stream processing workloads. StreamScope [35] and TimeStream [38] assume deterministic computations, which restricts their applicability in practical scenarios, while SEEP [36] and Rhino [18] additionally assume records to be timestamped with a monotonically increasing logical timestamp, failing to support out-of-order processing [34], which is supported by the majority of modern streaming systems today. Finally, MillWheel [3] is the only system that does not make these assumptions, but it requires a specialized transactional backend, such as Spanner [17], which requires atomic clocks not found in commodity clusters.

In this paper we propose Clonos, a fault tolerance and high-availability method built on top of Apache Flink with the goal of supporting all workloads that Flink supports today; i.e., Clonos, as opposed to related work, supports nondeterministic computations. Although Clonos was built and tested on Apache Flink [12], it can be used in any stream processor that supports FIFO per-partition channels and coordinated checkpoints [15].

In this paper we make two important contributions. First, we describe a protocol and the associated system components to perform local recovery without the need to restart a complete streaming topology, aiming at high availability and low latency with exactly-once processing guarantees. No existing work has addressed this problem on a feature-rich production-grade system. Second, we deal with the inherent nondeterminism of practical stream processing workloads in a manner transparent to application programmers.
To build Clonos, we implemented in-flight record logs and lineage-based replay for local recovery, standby tasks and live state transfer for high availability, and causal logging [20] for exactly-once consistent execution of nondeterministic computations and system functions. We present the recovery protocol, the high-availability mechanisms, the means to track nondeterminism, and a set of noteworthy system design and implementation decisions that render Clonos a practical replacement for Flink's fault tolerance mechanism. In short, with this paper we contribute:

- a novel fault tolerance approach that combines checkpointing, standby operators, and causal logging to:
  - provide exactly-once consistent local recovery and high availability on a production-grade system, and
  - support nondeterministic computations and system functions
- an analysis of nondeterminism in stream processing and how Clonos guarantees exactly-once processing
- thorough empirical experiments carried out in a realistic deployment

The rest of the paper is organized as follows. Section 2 offers an overview of our fault tolerance approach. Section 3 outlines the stream processing model used in the paper and includes preliminaries regarding rollback recovery, causal logging, and Apache Flink's execution model. Section 4 analyzes nondeterminism in stream processing and how it is addressed by Clonos, while Section 5 shows how Clonos guarantees exactly-once processing. Section 6 reports important design decisions necessary to make Clonos practically applicable. Finally, Section 7 presents a broad set of experiments and Section 8 presents related work. We conclude in Section 9.

2 APPROACH OVERVIEW

Clonos' main goal is to localize the impact of a failure to the minimum: only failed tasks need to recover from failure, and their upstream and downstream tasks take minimal action towards helping the failed tasks recover.
Recovering locally with exactly-once processing guarantees is challenging: in order to recreate the local state of the failed task, we need to use the most recent checkpoint of that task and replay all the input records whose effects (on the state) have not been checkpointed. Another difficult problem is record deduplication: because some of the records have already been produced by a failed task, the recovery protocol needs to ensure that those messages are only processed once. The problem becomes even harder for nondeterministic computations that may produce different output (and operator state) for the same input across executions. Achieving all this in a highly-available manner, where recovery has to be blazingly fast and the impact on the system's performance minor, is very challenging. Besides local recovery, Clonos features a high availability mode where it uses standby tasks with preloaded state to speed up recovery and further lower the impact of a failure. Below we give an overview of our recovery protocol.

2.1 Normal Operation

Figure 1 depicts a simple job with four tasks and their corresponding standbys. Each task executes a set of operators.

**In-Flight Records.** These are records that have been produced by an operator since the last successful checkpoint; i.e., their effects have not yet been recorded in the downstream operators' state. Tasks that send their output to downstream tasks (#1, #2, and #3) maintain a log of the in-flight records in memory until the next checkpoint is complete. This practice is the foundation of the upstream backup strategy [29]. Figure 1 captures a snapshot of the execution when the job is processing records of the yellow epoch. When a record reaches Task #2, it is processed and the output record (assuming for simplicity a function that produces a new record for each input record) is put in the output queue. Once the output record is transmitted over the network, it is added to the in-flight log.
The in-flight log is segmented into epochs, such that whenever a checkpoint completes, all records in epochs prior to the checkpoint can be removed.

**Log of Nondeterministic Events.** Tasks maintain a log [20] of determinants for recording information about nondeterministic events and operations. In addition, each task shares its log incrementally with downstream tasks, as we describe in Section 4.3. We present the different types of nondeterministic events and operations in Section 4.

**Standby Tasks & State Snapshots.** In high availability mode, Clonos deploys standby tasks that mirror operator state but remain idle, in that they do not take part in data processing. Each standby task receives state snapshots of its corresponding running task after each checkpoint.

2.2 Recovery Protocol

Let's assume that right after the execution snapshot depicted in Figure 1, a failure kills task #2. Figure 2 highlights the steps of our recovery protocol.

1. **Activate New/Standby Task.** The job manager initiates the fault recovery procedure, which starts a replacement task. In high availability mode, the topology maintains shadow/standby tasks that already contain the latest checkpointed state and remain idle until they are instructed to run by the job manager.
2. **Reconfigure Network Connections.** The standby task dynamically connects with the upstream and downstream task(s) of its predecessor in the topology.
3. **Retrieve Determinant Log.** The recovering task retrieves its predecessor's determinant log from its downstream task(s).
4. **Request In-Flight Records.** In parallel to step 3, the standby task sends an in-flight log request to its upstream task(s), which specifies the epochs to replay.
5. **Replay In-Flight Records.** Each upstream task replays its in-flight records for the requested epoch and channel. In this case, task #1 will replay the records of the yellow and green epochs in order. The recovering task (task #2) begins processing these records.
Whenever it reaches a nondeterministic operation, the task instead reads from the determinant log the expected result of the operation.

6. **Deduplicate Output.** In parallel to step 5, the recovering task uses its determinant log to ignore output that its predecessor produced before failing. These output records are instead used to rebuild the in-flight log state.

Clonos' recovery protocol differs in a number of ways from upstream backup [29], where upstream tasks replay the output records to recovering downstream tasks. Specifically, our protocol:

- uses checkpoints to reduce the duration of replay,
- uses determinants to deduplicate records at the sender following a failure, and to capture many sources of nondeterminism that we describe in the paper, and
- is optimized for the architecture and capabilities of today's distributed streaming systems, which feature shuffles, asynchronous data transfer, checkpoints, processing-time semantics, out-of-order processing, and communication with external services.

2.3 Applicability & System Requirements

Clonos makes two assumptions. The first is the existence of reliable FIFO channels between a pair of tasks, i.e., for each channel, the downstream task receives all records in the same order that the upstream task produced them. The second assumption is a checkpoint mechanism that creates snapshots of the system's global state at regular intervals. Although our concrete implementation of Clonos is in Apache Flink v1.7, both assumptions are satisfied by mainstream streaming systems. For instance, Apache Samza [37], IBM Streams [30], and the latest version of Spark [8] also provide such FIFO channels, while Streams, Jet, and Trill support checkpoints. Clonos' approach can also be easily adapted to systems using uncoordinated checkpoints, through the use of backwards-flowing checkpoint-complete notifications.
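The epoch-segmented in-flight log described in Section 2.1 can be sketched as follows. This is our own simplified illustration, not Clonos' actual classes: sent records are retained per epoch, epochs covered by a completed checkpoint are truncated, and retained records can be replayed in FIFO order for a recovering downstream task.

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of an epoch-segmented in-flight log for one output channel.
class InFlightLog<R> {
    private static final class Entry<R> {
        final long epoch; final R record;
        Entry(long epoch, R record) { this.epoch = epoch; this.record = record; }
    }
    private final ArrayDeque<Entry<R>> log = new ArrayDeque<>();

    /** Called after a record has been transmitted downstream. */
    void append(long epoch, R record) { log.addLast(new Entry<>(epoch, record)); }

    /** Called when a checkpoint completes: epochs <= checkpointEpoch are now stable. */
    void truncate(long checkpointEpoch) {
        while (!log.isEmpty() && log.peekFirst().epoch <= checkpointEpoch) log.removeFirst();
    }

    /** Replays, in FIFO order, all retained records from fromEpoch onwards. */
    List<R> replayFrom(long fromEpoch) {
        List<R> out = new ArrayList<>();
        for (Entry<R> e : log) if (e.epoch >= fromEpoch) out.add(e.record);
        return out;
    }
}
```

Because epochs are appended in order and truncated from the front, both operations are O(1) amortized per record, which matters given that the log sits on the hot path of normal operation.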
Clonos’ implementation requires extending multiple system components such as the job manager, scheduler, checkpoint & fault tolerance mechanisms, the network stack, and the base stream operators. Clonos’ implementation is available online.¹ 3 PRELIMINARIES This section provides the necessary background on the concepts used throughout the paper. We focus on current recovery mechanisms for stream processing and how these relate to rollback recovery schemes and causal logging. 3.1 Streaming Model Stream processing systems [3, 12, 30, 37] process unbounded collections of records continuously by ingesting them into a dataflow graph where edges denote record streams and vertices denote operators. Each operator, receives records from an upstream operator, applies a computation on those records, and produces output records that it sends to the next operator(s) downstream. Each operator that produces output retains output buffers for sending output records downstream efficiently in batches. 3.2 Checkpoint-based Rollback Recovery The main fault tolerance mechanism in modern scale-out streaming systems such as Apache Flink, IBM Streams, Trill, and Jet, is converging towards periodic Chandy Lamport-style [15] checkpoints ¹https://github.com/delftdata/Clonos Figure 2: Steps of the fault recovery protocol. of the system’s global state [10, 12, 14, 30]. To recover from a failure, systems roll back the state of all operators to the latest checkpoint and resume data processing from a specific input offset, possibly replaying part of the computation that was lost during failure. This stop and restart strategy can achieve exactly-once processing guarantees [10]: the effects of all input records will affect the system’s operator state exactly-once. However, as the execution graph grows, so does the downtime and latency incurred by the restart. In the event of a single failure the complete execution graph needs to be torn down and restarted from the latest global checkpoint. 
This can be fixed with local rollback recovery schemes that, in addition to the checkpoint, also store in-flight records: a copy of all records they have produced since their last checkpoint. If a task fails, the system can roll back to its last checkpoint and replay its incoming records from the upstream tasks. Local recovery approaches that use in-flight logs [22, 36] can recover faster, but require two restrictive assumptions: i) that operators are deterministic (Section 4), i.e., reprocessing the same record a second time will yield the same output, and ii) that each record can be identified uniquely via a logical timestamp. During replay, tasks downstream from the failure can apply deduplication using these timestamps. Clonos lifts those long-standing restrictions using causal logging.

3.3 Log-based Rollback Recovery

Log-based rollback recovery has been extensively studied in the context of distributed systems. A stream processing system can be seen as a message-passing system executing processes that send and receive messages. In the sequel, we will refer to messages as records. Log-based approaches rely on the piecewise deterministic assumption [21], which states that all nondeterministic events can be identified and that the system can log their determinants. To reproduce a nondeterministic event\(^2\) \(e\) (e.g., a timer, a random number, the result of a call to an external service/system), one must store the event and its determinant, denoted by \(\#e\). However, having the determinants alone is not enough to replay the nondeterministic events. To replay record reception events, it is required that the record contents be replayed as well. This can be done in one of two ways: i) the receiver can log the record contents together with the determinants, or ii) the sender can keep a log of the sent messages that are not yet stable, in a so-called in-flight record log.
The second case is more common, because the first requires logging a large number of messages in stable storage. Instead, the in-flight record log can be kept in volatile memory, because after a failure it can be deterministically rebuilt using the input streams and the determinants.

3.4 Causal Logging

Causal logging [6, 19] is a log-based rollback recovery approach particularly well-suited to stream processing. Unlike pessimistic logging, causal logging maintains the determinant log in memory, and unlike optimistic logging, it ensures the always-no-orphans property [4] (Equation 1), allowing for localized recovery. An orphan process is defined as a process whose state depends on a nondeterministic event \(e\) that cannot be reproduced during recovery [21]. If a nondeterministic event cannot be reproduced, then the state of orphaned processes must be rolled back to before that event, in order to ensure consistency.

\[ \forall e : \Box(\neg \text{Stable}(e) \implies \text{Depend}(e) \subseteq \text{Log}(e)) \] (1)

where \(\text{Depend}(e)\) is the set of processes whose state was affected by \(e\) according to the happens-before relationship, \(\text{Log}(e)\) is the set of processes that have logged \(e\)'s determinant in volatile memory, and \(\text{Stable}(e)\) is a predicate which becomes true when \(e\)'s effects are stored in stable storage (i.e., checkpointed). Finally, the operator \(\Box\) is the temporal *always* operator. Causal logging ensures that either i) all processes that depend on \(e\) have logged its determinant or ii) \(e\) is stable. If a set of processes \(\mathcal{F}\) fails, then for every non-stable event \(e\) either \(\text{Depend}(e) \subseteq \text{Log}(e) \subseteq \mathcal{F}\), in which case there is no orphan, or \(\text{Depend}(e) \subseteq \text{Log}(e) \not\subseteq \mathcal{F}\), in which case at least one surviving process has the determinant of \(e\) and can share it with the recovering processes.
Causal logging can be optimized by ensuring that no unnecessary determinants are sent to processes that do not depend on them, by strengthening the always-no-orphans property as follows.

\[ \forall e : \Box(\neg \text{Stable}(e) \implies ((\text{Depend}(e) \subseteq \text{Log}(e)) \land \Diamond(\text{Depend}(e) = \text{Log}(e)))) \] (2)

This property conveys that, while \(e\) is not stable, all processes dependent on \(e\) must have logged it and, eventually (\(\Diamond\)), the processes that have logged it will be no more than those that depend on it. However, processes only depend on events of other processes if they receive messages from them, because those events happened before the delivery of the message. Thus, there is no need to send extra messages containing determinants, since the determinants a process needs can be piggybacked on the very message that makes it causally dependent on those determinants. Finally, in causal logging, if the number of possible concurrent failures is bounded by a value \(f\), it is possible to implement stable storage while avoiding disk access by logging to \(f + 1\) processes [5]. In this case, a process may avoid sending its determinants to processes that have not logged them, if enough processes have already logged them for them to be considered stable.

\[ \forall e : \Box((|\text{Log}(e)| \leq f) \implies ((\text{Depend}(e) \subseteq \text{Log}(e)) \land \Diamond(\text{Depend}(e) = \text{Log}(e)))) \] (3)

4 DEALING WITH NONDETERMINISM

Nondeterminism severely complicates the recovery of streaming topologies. The main issue arises when, upon failure and recovery, one needs to deduplicate records that have been generated twice during replay. If the recovering operator (the one producing the duplicates) is deterministic, downstream operators can simply eliminate the duplicate records, because they know they have received them before.
However, if the recovering operator is nondeterministic, it may generate different records upon recovery, and/or generate them in a different order. In that case, the downstream operators cannot correctly eliminate duplicates, as they cannot distinguish them from non-duplicates. This is a very simplistic example of the relationship between local recovery schemes and determinism.

²Not to be confused with stream events, which are used interchangeably with records in database research nomenclature.

Clonos is the first local recovery scheme to offer exactly-once processing guarantees in the absence of determinism, by tracking all sources of nondeterminism and by leveraging causal logging.

**Causal Logging for Stream Processing.** Clonos leverages causal logging [19] to address the issues of nondeterminism. Unlike processes in classic message-passing systems, the dataflow operators that process a streaming query are multi-threaded, including threads for data processing, timers, networking, flushing, and receiving RPCs. Most of these threads affect state and generate records at arbitrary points in system time. In addition, a stream processing system offers operations that rely on system or processing time, such as processing-time windows. All of these nondeterministic computations and functions need to be controlled in order to provide replayable job executions in a streaming system. In the rest of this section, we analyze the sources of nondeterminism (Section 4.1) and elaborate on how we deal with them (Section 4.2), including what we term causal services – a programming abstraction that supports nondeterminism for system programmers as well as for users authoring UDFs. In Section 4.3 we present the causal log, and in Section 5 we discuss how Clonos guarantees exactly-once processing. Figure 3 depicts the concepts discussed in this section.
### 4.1 Sources of Nondeterminism

We now exhaustively list the sources of nondeterminism that can be found in most modern stream processing systems.

**Windowing & Time-Sensitive Computations.** Streaming computations very often manipulate the inherent time dimension of data, which is based on event-, processing-, or ingestion-time. Of those, processing-time and ingestion-time are nondeterministic because they rely on the local system time at the operator where they are being processed. More specifically, when processing ingestion-time windows, the source operator simply adds a field to each record marking when that record entered the system. Upon a failure and replay, the ingestion time will change (the system time at the sources has advanced), and windowing computations may not return the same results. The same holds for processing-time windows, which, instead of taking into account the ingestion time of records, simply trigger at periodic moments in time using timers, based on the local clock of the windowing operator.

**Event-Time Windows & Out-Of-Order Processing.** Event-time is quite different from processing-time. It is the time at which the records are generated at the input sources (e.g., sensors and mobile devices). In its simple form it is deterministic: no matter how many times one replays a stream, the event-time of each record does not change. However, event-time introduces another complexity: the possibility of records arriving from input sources out-of-order due to network congestion or other reasons [41]. Streaming systems like Google Dataflow, Apache Beam & Flink accept out-of-order events up to a **lateness bound** based on a low-watermark [34]: a marker generated at the input sources according to wall-clock time that is then embedded in the data stream. Since low-watermarks are generated according to wall-clock time, using timers, they are nondeterministic.

**Timers.** Timers are programmatic hooks which can be set to execute at some point in the future.
Both the system and users can register timers. The triggering of timers is controlled by a timer thread, and the interleaving of operations between two threads is nondeterministic.

**User-Defined Functions & External Calls.** User-defined functions are not sandboxed: they are allowed to call external services, reach external key-value stores, and make other asynchronous calls. No interaction with the outside world can be expected to be deterministic. Consider, for example, a call to an external database that queries the current stock price; this can change at any point in time. As a result, calling external services cannot be considered deterministic and, during recovery, computations can change.

**Random Numbers.** Users may want to use random numbers in operations. Pseudo-random number generators are typically initialized using the current time, producing nondeterministic results.

**Keyed Streams & Record Arrival Order.** To parallelize and group streams, it is common to partition them using a partitioning key. We refer to such a stream as a keyed stream. Downstream operators (e.g., a reduce operator) receive inputs from multiple upstream operators, on a per-key basis. The issue here is that, depending on the network speed, the connections between the various operators, etc., the order in which records arrive is not always the same upon recovery. Operators are often order-sensitive; in that case, an operator will also generate records in a different order than it did before the failure. In other words, operators that process multiple inputs are not deterministic. To make them deterministic, we need to fix the order in which records are replayed on recovery (see Section 4.2).

**Checkpoints & Received RPCs.** Checkpoint-based fault tolerance protocols inject checkpoint barriers into the dataflow graph; these instruct operators to checkpoint their state as the barriers pass through them.
Those barriers are injected into the dataflow graph through an RPC according to the system time of the job manager (e.g., every 10 seconds). Any RPC received by a task which affects its state is nondeterministic.

**Output Buffers.** Records are grouped into buffers before they are sent downstream. Buffers are sent either when they are full or when the downstream task demands a buffer.³ In the latter case, the buffer might not be complete. Thus, the decision of whether the buffer will be split or not is very time-dependent and also depends on the request coming from downstream. This introduces nondeterminism in the size of buffers sent downstream that needs to be taken care of, so that the buffers can be retransmitted in the very same way during recovery.

### 4.2 Abstracting Nondeterminism with Services

To hide the complexity of causal logging and recovery from users who write user-defined functions (UDFs), operators have access to “causal services” that abstract the complexity away. For instance, assume that in Figure 3 a user-defined function calls the `Timestamp` service, which returns timestamps. Under normal operation, the service generates a nondeterministic timestamp and appends it to the causal log. During recovery, when the user-defined function requests a timestamp from the `Timestamp` service (shown in Listing 1), the service will instead return a timestamp read from the causal log. Users can register their own nondeterministic computations in Clonos by providing an anonymous function, as in Listing 2. Determinant handling and causal logging, as well as recovery, are in all cases performed transparently. Behind the scenes, Clonos applies the anonymous function as Listing 3 shows. We describe the built-in causal services below.

³To handle backpressure, streaming engines allow downstream operators to either pause transmission or force the transmission of a buffer as soon as it contains a record.
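Listings 1–3 themselves are not reproduced in this excerpt, so the following is our own minimal sketch of the pattern just described (all names are hypothetical): the runtime either evaluates a user's anonymous nondeterministic function and appends the result to the causal log, or, during recovery, replays the logged determinant instead of re-running the function.

```python
import random

class CausalRuntime:
    """Toy stand-in for the internal causal-service logic described above."""
    def __init__(self, causal_log, recovering=False):
        self.log = causal_log        # shared list of determinants
        self.recovering = recovering
        self.pos = 0                 # replay position during recovery

    def run_nondeterministic(self, fn):
        if self.recovering and self.pos < len(self.log):
            value = self.log[self.pos]   # replay the logged determinant
            self.pos += 1
            return value
        value = fn()                     # evaluate the user function once
        self.log.append(value)           # record its determinant
        return value

log = []
normal = CausalRuntime(log)
first = normal.run_nondeterministic(lambda: random.randint(0, 10**6))
# A recovering clone replays the same value, never re-running the function:
clone = CausalRuntime(log, recovering=True)
assert clone.run_nondeterministic(lambda: random.randint(0, 10**6)) == first
```

Because the user only supplies the anonymous function, the same UDF code runs unchanged in both normal operation and recovery, which is the point of the abstraction.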
**Record Processing Order.** The Order service is an internal service (not exposed to users) which logs the order in which input records are processed. For performance, this is done at the level of buffers, and each buffer is fully processed before the next is deserialized.

**Timers & Received RPCs.** Timers fire asynchronously with respect to the main thread, so their recovery is more complex. We first introduce unique IDs for every timer callback function. Then, we modify the timer internals to register a “TimerFired” determinant in the causal log, containing the timer's ID and the stream offset at which it fired. During recovery, if a “TimerFired” determinant is encountered, we wait for the same stream offset to be reached. We then use the timer ID to obtain and execute the corresponding callback. RPCs received by an operator are treated similarly.

**Wall-Clock Time.** When the Timestamp service is used to retrieve wall-clock time under normal operation, the service retrieves a timestamp from the system and logs it prior to returning it to the user. During recovery, the same service returns the logged timestamps instead of fresh wall-clock timestamps. Since this service may be called multiple times per millisecond, if the time granularity allows it (e.g., when ms-granularity timestamps are requested multiple times within the same ms), instead of generating a new timestamp on every call, this service utilizes timers to update a stored timestamp only periodically (each ms in this case). In between updates, the service simply returns its cached timestamp. This reduces the number of determinants generated by two orders of magnitude without a large loss in time granularity.

**Calls to External Systems.** Calls to external systems must be done through causal services (e.g., the HTTP service), which persistently record the response in the log. The response can then be deserialized from the log during recovery.
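The “TimerFired” replay described above can be sketched as follows (toy structures and names of our own): each determinant carries the timer ID and the stream offset at which it fired, and during recovery the callback is re-executed only once the replay reaches that same offset, reproducing the pre-failure interleaving.

```python
def replay(stream, determinants, callbacks, state):
    """Re-fire logged timers at the exact offsets recorded pre-failure."""
    pending = list(determinants)  # [(timer_id, offset), ...] in firing order
    for offset, record in enumerate(stream):
        while pending and pending[0][1] == offset:
            timer_id, _ = pending.pop(0)
            callbacks[timer_id](state)   # deterministic re-firing point
        state["sum"] += record           # normal record processing

state = {"sum": 0, "flushes": []}
callbacks = {"flush": lambda s: s["flushes"].append(s["sum"])}
# Suppose the pre-failure causal log recorded timer "flush" firing at
# stream offsets 2 and 4; replay then reproduces the same interleaving:
replay([1, 2, 3, 4, 5], [("flush", 2), ("flush", 4)], callbacks, state)
print(state)  # → {'sum': 15, 'flushes': [3, 10]}
```

Without the logged offsets, the timer could fire at a different point between records on replay, yielding different window contents or flush boundaries than the original run.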
**Random Numbers.** Instead of storing the numbers generated, the RNG service generates a new random seed on every checkpoint and stores it in the log. During recovery, the seed is read from the log and the numbers generated can then be deterministically reproduced.

Listing 3: Internal causal service logic.

### 4.3 Causal Log

The causal log stores the determinants for every nondeterministic event executed by a task. It is split into two parts: there is a causal log for the main thread of a task, and a separate causal log for each of the output channels of that task. In a typical message-passing system with a single thread of execution, causal logging [19] would require maintaining only one log, generated by that single thread. However, in a typical scale-out streaming system, the main processing thread is separate from the network threads for performance reasons, and they communicate through shared data structures. As the main thread writes to an output buffer, the output queue may decide to send the non-full (nondeterministically sized) buffer downstream. Thus, each queue has a causal log in which the size of each buffer sent is recorded (Figure 3). This log is used during recovery for deduplication.

**All Buffers Carry Determinants.** Whenever a buffer of data is sent downstream, a causal log delta is piggybacked on that buffer. The delta contains all the entries of the output queue logs and the main thread log since the last buffer dispatch. Note that the main thread log is essentially replicated to all downstream operators, as formally required by causal logging [19]. The idea behind this is that whenever a downstream operator receives determinants, those should suffice to fully restore the upstream operator.

**Replicating Determinants to Downstream Tasks.** The downstream task, upon receiving the buffer and the delta of the two logs, appends those updates to the corresponding task causal log.
In this way, before data is allowed to affect the state, the causal information necessary to recover it is already stored. To tolerate two successive tasks failing, one might also want to replicate the determinants of each task to a deeper sharing depth.

**Truncating Causal Logs.** The causal log is organized in segments according to epochs and is truncated whenever a checkpoint completes; the causal log is only needed in the middle of an epoch, when a local recovery has to complete using the in-flight logs and the previous checkpoint, as we describe in the next section.

5 EXACTLY-ONCE RECOVERY

In this section we show how Clonos deals with recovery and how it guarantees exactly-once processing with local recovery, using a causal log and in-flight records. In Section 5.5 we describe how Clonos could be extended to guarantee exactly-once output.

5.1 Lineage-based Replay

When a new task replaces a failed task, it needs to process the records of the current checkpointing epoch. Therefore, it requests that its upstream tasks replay their in-flight record logs. Upon the in-flight log request, upstream tasks start to replay the buffers contained in their in-flight logs, in the same order they were dispatched prior to the failure. The replay protocol of Clonos is based on lineage. If a task does not have an in-flight record log to replay for a downstream task (typically because it has itself just recovered from a failure), it will ask its own upstream tasks to replay their in-flight record logs. This lineage-based process can recurse up the operator graph all the way to the input sources, which we assume to be available to provide their input on demand.

5.2 Determinant-based Deduplication

When recovering a task, the task replays the received in-flight records and produces output. Achieving exactly-once processing when performing local recovery requires deduplication after replay.
In prior work [22], such deduplication is rather simple: each operator is considered to be deterministic, and all produced records bear a logical timestamp. The downstream operator can simply discard the records bearing already-seen logical timestamps. However, this receiver-based deduplication wastes bandwidth. Instead, deduplication in Clonos is done in two concurrent steps. First, as the main processing thread recovers, it uses its causal log to produce the exact same output records. Concurrently, the network channel threads use their causal logs, which contain only information about the sizes of the buffers sent downstream, to reconstruct the same buffers as were sent before.

5.3 Correctness of Recovery Scheme

In the following, we analyze the conditions under which recovery can be performed using determinants, depending on the depth to which determinants are shared. The correctness of causal logging as a rollback recovery approach has been formally proven in the past [4, 6]. Since Clonos tracks nondeterminism for multiple threads (the main processing thread and one thread per output channel), we model each thread as a process and recover them in unison. Thus, the proofs applicable to pure causal logging trivially extend to Clonos. However, ensuring exactly-once processing when locally recovering a failed operator remains an open question; we show that Clonos guarantees it in the following paragraphs. We base our reasoning on exhaustively enumerating the different states that the recovery mechanism can reach, depending on the determinant replication strategy and the different failure scenarios. Our aim is to show that independently of i) how the determinants are shared with downstream operators, and ii) which failure scenario takes place, there is a mechanism to recover the topology with exactly-once processing guarantees. This is done either by retrieving determinants and deduplicating using them, or by falling back to restarting the complete dataflow graph as in reference [10].
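The channel-side step of the deduplication in Section 5.2 can be sketched as follows (a toy model with names of our own): the channel causal log holds the sizes of the buffers sent before the failure, so the regenerated byte stream can be cut into byte-identical buffers that the receiver recognizes and discards.

```python
def rebuild_buffers(byte_stream, logged_sizes):
    """Slice a regenerated byte stream into the pre-failure buffer sizes."""
    buffers, pos = [], 0
    for size in logged_sizes:
        buffers.append(byte_stream[pos:pos + size])
        pos += size
    if pos < len(byte_stream):
        buffers.append(byte_stream[pos:])  # fresh output beyond the log
    return buffers

original_output = b"abcdefghij"
sent_before_failure = [b"abc", b"defg"]          # nondeterministic split
sizes = [len(b) for b in sent_before_failure]    # the channel causal log
rebuilt = rebuild_buffers(original_output, sizes)
# The first buffers are byte-identical, so the receiver can deduplicate:
assert rebuilt[:2] == sent_before_failure
assert rebuilt[2] == b"hij"                      # remainder sent as new data
```

This is why logging only buffer sizes suffices on the channel side: the main thread's causal log already guarantees the regenerated byte stream is identical, so re-slicing it identically reproduces the exact pre-failure buffers.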
Assume that in a DAG composed of $N$ tasks with a maximum depth $D$ (source tasks have a depth of zero), a set of tasks $\mathcal{F} \subseteq N$ fails. Clonos can be configured to use a determinant sharing depth (DSD) as large as the graph depth, or smaller. The determinant sharing depth also defines the number of consecutive tasks that can fail concurrently without creating orphan tasks. For instance, a sharing depth of two means that the determinants of a task $a$ are sent to the downstream task $b$ directly, and $b$ forwards the same determinants to its downstream tasks $c$ and $d$. If both $a$ and $b$ fail, we can recover them from the determinants that are stored by $c$ and $d$. In the following, we analyze the different recovery cases, as depicted in Figure 4.

Figure 4: Exhaustive list of failure cases & DSDs with the recovery scenarios that need to be followed in each case.

Case 1: $DSD = D$. We deal first with the case where the determinant sharing depth equals the depth of the dataflow graph, i.e., $DSD = D$. Note that in this configuration Clonos follows the condition stated in Equation 2. As such, determinants for a nondeterministic event $e$ whose effects have not yet been globally checkpointed are propagated to all downstream processes. Determinants piggybacked on a buffer are logged by a task (processed by the causal log manager) before the operator state becomes dependent on them (i.e., before the operator processes the buffer’s records), and as such at no moment do we break the condition that $\text{Depend}(e) \subseteq \text{Log}(e)$. Two failure cases can occur:

- $\text{Log}(e) \subseteq \mathcal{F}$: Since the condition $\text{Depend}(e) \subseteq \text{Log}(e)$ also holds, no surviving process depends on $e$, meaning that a different execution path may be taken without breaking consistency or the always-no-orphans condition.
- $\text{Log}(e) \not\subseteq \mathcal{F}$: At least one surviving process has the determinant of event $e$, in which case it guides the recovery, either by ensuring the main thread follows the correct execution path or by ensuring an output thread deduplicates a buffer and thus the records it contains.

Translating this to stream processing: the first case can only happen when, for a given failed task, all of its downstream tasks also fail, as otherwise the downstream tasks will have the necessary determinants to bring the failed tasks into a state consistent with the surviving downstream tasks. The extreme case happens when $\mathcal{F} = N$, in which case no task is dependent on any other, and recovery is effectively equivalent to restoring a global checkpoint and beginning replay from the graph’s input sources.

Case 2: $DSD < D$. In the case where the determinant sharing depth is less than the depth of the dataflow graph, Clonos follows the condition of Equation 3 by not sharing $e$'s determinant to a depth greater than $DSD$. In this case, there is the possibility that $\text{Log}(e) \subseteq \mathcal{F}$ while $\text{Depend}(e) \not\subseteq \mathcal{F}$, meaning that some orphaned process remains. When one of the orphaned processes receives a determinant log request from a recovering task for a log it does not have, it will escalate this to the JobManager, which will trigger a full rollback of the DAG, thus preserving exactly-once processing guarantees. The alternative case is that $\text{Log}(e) \not\subseteq \mathcal{F}$, in which case at least one surviving task has the determinants of the nondeterministic event $e$ and can guide the recovery of the failed tasks which depend on it.
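The case analysis above can be condensed into a toy decision procedure (our own sketch, mirroring the leaves of Figure 4; sets and names are illustrative): given the dependents and loggers of a non-stable event $e$ and the failed set $\mathcal{F}$, pick the recovery scenario the text prescribes.

```python
def recovery_action(depend, log, failed):
    """Classify recovery for a non-stable event e (toy model)."""
    if log <= failed:          # every holder of e's determinant failed
        if depend <= failed:   # ...and so did every dependent: no orphan,
            return "free-replay"       # a different execution path is fine
        return "global-rollback"       # orphaned survivor: escalate to the
                                       # JobManager, full DAG rollback
    return "guided-local-recovery"     # a survivor supplies the determinant

# DSD = D: Depend(e) ⊆ Log(e), so an orphan can never survive alone.
assert recovery_action({"b"}, {"b", "c"}, {"a", "b", "c"}) == "free-replay"
assert recovery_action({"b", "c"}, {"b", "c"}, {"b"}) == "guided-local-recovery"
# DSD < D: a deep dependent d never received the determinant, and all
# loggers failed, so d is an orphan and a full rollback is needed.
assert recovery_action({"b", "c", "d"}, {"b", "c"}, {"b", "c"}) == "global-rollback"
```

The third case only arises when the sharing depth is smaller than the graph depth, which is exactly the trade-off Section 5.4 exposes as a configuration knob.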
Summarizing, the recovery cases depicted in the leaves of the trees in Figure 4 show that there are cases i) when the determinants are not required for recovery, ii) when determinants are required and can be found in some surviving task, and, finally, iii) (the worst case) when the topology can only recover via a global rollback recovery mechanism.

5.4 Trading Correctness for Performance

Clonos is flexibly configurable in terms of its fault tolerance guarantees. By combining its different building blocks, it can achieve different processing guarantees, as follows.

At-most-once. By disabling both in-flight logging and causal logging/determinants, failed tasks are recovered with gap recovery [29], leading to inconsistent state with at-most-once processing guarantees, but incurring very little overhead.

At-least-once. By setting the determinant sharing depth $DSD = 0$, only in-flight logging is enabled, and failed tasks are recovered with divergent rollback recovery, achieving at-least-once processing guarantees with very little overhead due to Clonos’ no-copy in-flight log (Section 6.1).

Exactly-once. By enabling causal logging, it is possible to perform consistent recovery of failed tasks, providing exactly-once processing guarantees, again with little overhead. If the overhead of causal logging becomes a concern, Clonos can also trade off determinant sharing depth for performance. The determinant sharing depth is set to the depth of the graph by default, but by lowering it to some number $f$, the determinant sharing overhead is reduced in exchange for supporting at most $f$ concurrent consecutive failures. In this case, if more than $f$ consecutive failures happen, Clonos can be configured to favour either i) availability, with at-least-once guarantees (skipping the deduplication step), or ii) consistency, by falling back to recovery using the latest global checkpoint [11].
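The guarantee matrix above can be summarized in a small helper (our own sketch; `guarantee` is not a Clonos API, just a restatement of the three configurations):

```python
def guarantee(in_flight_log, causal_log, dsd):
    """Map a toy Clonos configuration to its processing guarantee."""
    if not in_flight_log and not causal_log:
        return "at-most-once"     # gap recovery: cheap but inconsistent
    if in_flight_log and (not causal_log or dsd == 0):
        return "at-least-once"    # divergent rollback recovery
    return "exactly-once"         # causal logging enables deduplication

assert guarantee(in_flight_log=False, causal_log=False, dsd=0) == "at-most-once"
assert guarantee(in_flight_log=True, causal_log=False, dsd=0) == "at-least-once"
assert guarantee(in_flight_log=True, causal_log=True, dsd=3) == "exactly-once"
```

In the exactly-once configuration, `dsd` then separately bounds how many consecutive concurrent failures can be handled locally before falling back to availability or a global rollback, as described above.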
5.5 Achieving Exactly-once Output

There are two common methods for achieving exactly-once output in stream processing systems: idempotent sinks [1, 7, 8] and transactional sinks [8, 10]. Idempotent sinks do not work in the face of nondeterminism, while transactional sinks introduce latency proportional to the checkpoint interval. Clonos can be trivially extended to achieve exactly-once output by piggybacking serialized determinants on records sent to downstream systems (e.g., Kafka). The downstream system has to store these determinants and be able to return them when requested. The determinants of a previous epoch can be truncated after each checkpoint. In this way, Clonos can achieve very low-latency exactly-once output, since outputs can already be consumed by external systems without having to wait for a checkpoint to complete and for transactional sinks to perform a two-phase commit.

6 SYSTEM DESIGN DECISIONS

In this section we detail the interesting and non-trivial design decisions behind the various building blocks comprising Clonos.

6.1 In-flight Record Log

Clonos stores in-flight records in each task that sends its output to other tasks downstream. Because an upstream task may send records to multiple tasks downstream, the records are logged per output channel (partition), which corresponds to a specific connection with a downstream task. To optimize throughput, Flink sends records downstream serialized in network buffers. Clonos logs these buffers in the in-flight log before they are sent.

Avoiding Buffer Copies. Normally, when a buffer is sent over the network, it needs to return to the buffer pool of the output channel and be recycled. However, the in-flight log also needs to store that buffer. One choice would be to copy it over and then recycle the buffer.
However, to avoid copying buffers, whenever a buffer is dispatched from the network layer downstream, the output channel simply hands that buffer over to the in-flight record log. This, however, can cause deadlocks: the output channels could be waiting for buffers to become available in order to serialize output records, but no buffer would be available if they were all held by the in-flight log.

Large Buffer Pools & Backpressure Delay. After going through multiple design and implementation iterations optimizing throughput and latency, we opted for the following strategy. As seen in Figure 1, each channel maintains two buffer pools. One buffer pool serves the output channels and the other serves the in-flight log. When the network layer hands over a buffer to the in-flight log, the in-flight log hands an empty buffer back to the buffer pool of the output channel in exchange. Interestingly, in our experiments we have seen that a network connection between two operators needs around 10 buffers per channel, not more. Adding more buffers to output channels might seem rational, but it has an important side effect: it breaks the natural backpressure mechanism. The more buffers available for output, the slower the reaction of upstream operators to slowdowns of downstream operators, delaying the backpressure messages from propagating back to the sources. That is precisely the reason why Apache Flink, by default, uses a very small buffer pool for output. Clonos, however, has to address an additional issue owing to the small number of buffers available to the output queue. While a task upstream of a failure replays buffers to the recovering task downstream, its main processing thread continues to produce records that very quickly fill the buffers available to the output queue, as those buffers cannot be sent before the replay completes.

⁴This is also known as the output commit problem [21].
With no buffers available, processing stops for all output partitions/channels of the task. This issue conflicts with the philosophy of Clonos that the system should never stop making consistent progress. We solved it by placing the buffers at the back of the in-flight log even though they were still unsent. This is allowed because, if the downstream task has failed, we are guaranteed to replay them at a later time.

**Spilling to Disk.** Our in-flight log is segmented into epochs, and whenever a checkpoint completes successfully, the in-flight log is truncated up to that checkpoint, making the freed data buffers available in its local buffer pool. The in-flight record logs are kept in memory by default. Depending on the checkpoint frequency and the input throughput, the in-flight log may grow beyond the size of the log’s buffer pool, leading to blocked processing and backpressure. To counteract this issue, we introduced an asynchronously spilling in-flight log that persists buffers to disk (Figure 1), recycling them whenever necessary. The spilling in-flight log transitions seamlessly from on-disk buffers to in-memory buffers and prefetches on-disk buffers to speed up the replay process. It functions according to one of the following four (configurable) policies.

- **In-memory:** keep all buffers in memory.
- **Spill-epoch:** spill each epoch as soon as the next one starts.
- **Spill-buffer:** spill each buffer as it arrives.
- **Spill-threshold:** spill all buffers whenever the buffer pool’s ratio of available buffers drops below a configurable fraction.

The in-memory and spill-epoch policies both suffer from the possibility of blocking processing when the checkpoint interval is too large. The spill-buffer approach instead entails additional synchronous work, which increases overhead and forgoes batching of I/O operations. The spill-threshold approach offers a well-rounded solution to the above issues.
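A minimal sketch of the spill-threshold policy (toy classes and names of our own; the real log spills asynchronously, while this sketch spills inline for clarity):

```python
class InFlightLog:
    """Toy in-flight log with a spill-threshold policy."""
    def __init__(self, pool_size, spill_ratio=0.25):
        self.pool_size = pool_size
        self.available = pool_size   # free buffers in the log's pool
        self.in_memory = []          # buffers currently held by the log
        self.on_disk = []            # spilled buffers (in replay order)
        self.spill_ratio = spill_ratio

    def append(self, buffer):
        self.in_memory.append(buffer)
        self.available -= 1
        if self.available / self.pool_size < self.spill_ratio:
            # Spill everything currently in memory, recycling its buffers.
            self.on_disk.extend(self.in_memory)
            self.available += len(self.in_memory)
            self.in_memory.clear()

    def replay_order(self):
        # Replay transitions seamlessly from on-disk to in-memory buffers.
        return self.on_disk + self.in_memory

log = InFlightLog(pool_size=4)
for b in ["b0", "b1", "b2", "b3", "b4"]:
    log.append(b)
assert log.replay_order() == ["b0", "b1", "b2", "b3", "b4"]
```

Raising `spill_ratio` trades more disk I/O for a lower risk of exhausting the pool and blocking processing, which is the batching advantage over spill-buffer described above.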
### 6.2 Network Channel Reconfiguration

Clonos reconfigures network channels dynamically in order to introduce a new task into the topology. Once the new task receives the acknowledgment from an upstream task, it requests to establish a persistent network connection with its upstream tasks. After a new connection has been set up, the lineage-based replay protocol can begin. We found it particularly challenging to re-engineer the network stack in order to establish connections between tasks while jobs were executing. The main challenge was aligning network buffers and the counters that track buffer sequence IDs. In addition, the record deserializers of each input channel often keep state from one buffer to the next, as they wait to receive the remaining part of a record with the next buffer.

### 6.3 Standby Tasks

Each standby task mirrors a running task. It contains the same processing logic and stores the same type of state as the task it mirrors. If a running task fails, its corresponding standby task substitutes for it. In contrast to a running task, a standby task remains idle unless it is commanded to run. The allocation strategy of standby tasks embodies an important tradeoff between resource utilization, failure safety, and even performance. By controlling the affinity and anti-affinity of standby task allocation, stream processing jobs can tune the number of compute nodes they utilize for standby tasks. Each saving in resource utilization directly reduces Clonos’ safety guarantees, since co-locating two or more standby tasks on the same node makes Clonos more susceptible to a potential failure of that node. Performance is another factor to weigh when deciding the placement of standby tasks, i.e., their allocation strategy. Depending on a job’s processing, co-locating two specific tasks may be critical for performance. If performance optimization is more important than failure safety, a job may choose to co-locate the corresponding standby tasks.
By default, Clonos allocates standby tasks using the same allocation strategy that a job provides for its running tasks.
### 6.4 State Snapshot Dispatch
Similar to related work [29, 33], Clonos transfers the state snapshot of each running task to its corresponding standby task once a checkpoint is complete. Clonos' state snapshot dispatch can leverage the various approaches offered by the underlying system, such as direct transfer to the local disk of the standby task via a file URL, or transfer to a shared file system. In addition, if the state backend supports incremental checkpoints, then the cost of dispatching state depends on the state's delta instead of its absolute size. By receiving state snapshots regularly, standby tasks lag behind their running counterparts by at most one checkpoint. It is important to note that the state transfer process is bound by the checkpoint frequency and the checkpoint duration, which depends on the state size. A state snapshot should not take longer to dispatch to a standby task than the job's checkpoint interval. In practice, this constraint can be met by never performing concurrent checkpoints. Under this assumption, a checkpoint is guaranteed to complete before the next one begins, and state transfer is expected to complete before the next checkpoint's completion when using a distributed file system. Finally, if a standby task is called to run while a state snapshot is in transit, Clonos will wait for the transfer to complete before starting the execution of the standby task.
### 7 EXPERIMENTAL EVALUATION
In this section, we first present our experimental methodology for running two categories of experiments: overhead experiments, where we measure the overhead of Clonos in terms of throughput and latency under normal operation, and failure experiments, where we study Clonos' fault recovery. In both cases, we compare with Flink, the engine on which our changes were introduced.
### 7.1 Experimental Setup
We evaluate Clonos on a Kubernetes cluster hosted in a cloud environment. The Kubernetes cluster hosts a 3-node Kafka cluster, which serves both as the data source and the data sink of the failure experiments. An HDFS deployment, with a single NameNode and three DataNodes, stores the operators' checkpoints. Finally, the Kubernetes cluster hosts a Flink cluster with 150 TaskManagers, each containing a single task slot. Each TaskManager has access to 2GB of memory and two processing cores. A given configuration's throughput is measured by sampling the Kafka cluster three times per second for the number of records in the output topic. Dividing the number of new records by the elapsed time, we obtain real-time throughput. A given configuration's latency is measured by sampling the output Kafka topics of each job and computing the output records' latency. Finally, we configure Flink to offer the fastest possible recovery, so as to provide a fair comparison. This means lowering the failure detection parameters to values not recommended for use in production. In particular, heartbeats are sent every 4 (default: 10) seconds, timing out after 6 (default: 60) seconds.
### 7.2 Workloads
**Nexmark.** Since Clonos can be a drop-in replacement for Flink jobs, we used the Nexmark [44] benchmark, along with the extra queries implemented by the Apache Beam project. To enable this, we implemented a Clonos runner for Apache Beam. Nexmark includes queries that perform filtering, joins, aggregates, complex windowing, etc., and serves as a benchmark for evaluating stream processing engines. We have excluded Q10 from the benchmark because it requires access to Google's GCP service.
**Synthetic.** We also use a synthetic workload to be able to evaluate Clonos under configurable scenarios not found in Nexmark, and to avoid optimizations such as operator fusion. This way, for each operator, there is an extra layer of depth for which Clonos pays the full network and serialization costs of determinants.
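The throughput computation used in the setup above (new records divided by elapsed time between consecutive Kafka samples) amounts to the following small sketch; the function name and input shape are our own:

```python
def throughput_samples(samples):
    """Real-time throughput from (timestamp_seconds, total_record_count)
    samples of an output topic: new records divided by elapsed time."""
    rates = []
    for (t0, c0), (t1, c1) in zip(samples, samples[1:]):
        rates.append((c1 - c0) / (t1 - t0))  # records per second in this window
    return rates
```

Sampling three times per second, each rate covers roughly a 333 ms window, which is what makes the recovery dips in the throughput plots visible at sub-second resolution.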
For the synthetic experiments presented, we inject multiple sequential failures into Clonos, either concurrently or at intervals. In the interest of space, we only include a subset of our results.
### 7.3 Overhead Under Normal Operation
In this series of experiments we observe the performance of Clonos under normal operation, i.e., without failures, and quantify runtime overheads. We execute the complete set of Nexmark benchmark queries, setting the degree of parallelism of each operator to 25, meaning that the different jobs occupy between 75 CPU cores (3 operator stages for the simplest queries such as Q1-2) and up to 150 CPU cores (6 stages for Q7). Operator fusion is turned on. In the interest of space, we do not plot latency measurements, as we observed those to be stable and comparable to Flink's latency throughout our overhead experiments, with one notable difference: the tail latency in the case of DSD=Full can be up to 20% worse (ca. 25ms) than vanilla Flink's. For DSD=1 we noticed an overhead of less than 10% in the worst case. Figure 5 depicts the overhead of Clonos on throughput. First, we see that simple queries such as Q1-Q2, which are implemented with simple `map` & `filter` operators (D=1), are not affected by the overhead that comes with Clonos, such as in-flight logging. In fact, such a small difference in throughput can just as easily be attributed to the effects of the underlying infrastructure. The most complex queries are Q5 and Q7, which are implemented using an aggregation tree to handle skewed keys, and which also perform windowed aggregates. For both queries we observe that, since their depth is D=6, “Full” determinant sharing (i.e., DSD=6) has a high impact on throughput: up to 26%. However, a more reasonable DSD=1 or 2 yields around 15-16% overhead in throughput. We find this penalty in throughput reasonable, considering the benefits of Clonos' fast recovery times (Section 7.4) and its ability to deal with non-deterministic operators.
Finally, throughout the whole benchmark, we observed an average throughput penalty of 7% for DSD=Full and 6% for DSD=1 compared to vanilla Flink.
### 7.4 Clonos Under Failure Scenarios
For the failure experiments we chose to present detailed throughput and latency metrics for two of the most interesting Nexmark queries: Q3 and Q8. In addition, we evaluate Clonos against Flink on multiple and concurrent failure scenarios using a synthetic workload.
**Recovery Time.** We define recovery time to be the time between the instant that a failure takes place and the instant that the recovering system's observed latency has returned to values within 10% of the pre-failure latency. This metric is used to evaluate a mechanism's ability to recover quickly from a failure. Note that this metric also includes the time that a system needs to catch up with the input stream. Although Clonos is operational in less than a second, many practical use-cases (e.g., credit card fraud detection) require that the system, after recovery, also catch up with the input stream throughput and get back on track in order to process data as soon as it becomes available.
*What is the performance of Clonos with respect to latency and throughput in the presence of single-operator failures?*
**Nexmark.** We focus on Q3 and Q8. Q3 performs a full-history join and filtering operations, while Q8 performs a windowed join, which explains the throughput spikes, as we measure throughput at the job output sinks. We also experimented with Q4, Q5, and Q7, since they are the most complex queries, but those produce very few output records, making them unsuitable for plotting and exemplifying recovery times. In order to observe end-to-end latency, a regular stream of output records must be generated. Figure ?? shows that Clonos recovers within 10s by leveraging standby operators and local recovery.
After a sub-second switch to the standby operator, replaying the lost epoch took roughly 10s, at which point a small number of queued records was emitted with 10s latency before the system caught up. During this time, the surviving tasks continued operating at regular latency. Flink, however, loses availability on all tasks and takes at least 87s to recover and catch up. In addition, different output partitions recover at different speeds, indicated by the different lines of points visible in the plot. In Figure ?? we inject a failure into the join operator. Clonos recovers within 3s. Note that since we measure latency on the output records (end-to-end latency), the visible points arranged vertically signify records of different arrival times in their respective windows. The window range also explains the empty spots in the figure, as the window fires every 10s. Flink, on the other hand, takes more than 72s to fully recover. In terms of throughput, Figure ?? depicts Clonos' ability to instantly recover the job's original throughput, while Flink experiences a downtime of multiple seconds and a turbulent recovery. Notice how Clonos' throughput is barely affected following the failure. We can observe similar behavior in Figure ??.
*What is the performance of Clonos with respect to latency and throughput in the presence of multiple failures?*
We perform our multiple and concurrent failure experiments at parallelism 5, operator graph depth 5, a checkpoint interval of 5 seconds, and a per-operator state size of 100 MB. Specifically, Figures ?? and ?? depict an experiment with three failures at 5-second intervals, while Figures ?? and ?? depict an experiment with three concurrent failures. The failures are sequenced, meaning the failed operators have connected dataflows. We observe that, independently of the frequency of failures (whether they are staggered or concurrent), Clonos' recovery behaves similarly.
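The recovery-time metric defined at the start of this subsection can be sketched as a function over a latency time series; names and the sampling format are illustrative, not the paper's actual tooling:

```python
def recovery_time(samples, failure_t, baseline, tol=0.10):
    """Recovery time: delay until observed latency first returns to within
    `tol` (e.g. 10%) of the pre-failure `baseline` latency.

    samples: list of (timestamp, latency) pairs, sorted by timestamp.
    Returns None if latency never recovers in the observed window.
    """
    for t, lat in samples:
        if t >= failure_t and lat <= baseline * (1 + tol):
            return t - failure_t
    return None
```

Because the metric is computed on end-to-end latency, the catch-up phase (queued records emitted late) is automatically counted as part of recovery, matching the definition above.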
Before the downstream failures can be recovered, the upstream failures must finish recovering so that they can replay their in-flight logs. Only partial throughput is lost during recovery, as records continue to flow through causally unaffected paths even though shuffle connections are used. Similarly, latency is increased only on the small subset of records flowing along causally affected paths, and latency quickly returns to its pre-failure value.
### 7.5 Memory Usage
The memory usage of Clonos is completely bound by the size of the configured buffer pools (Section 6.1). We have experimented with different memory sizes and spill strategies for the storage of the in-flight record log as well as the determinants. We observed that while the spill-buffer strategy is much more conservative memory-wise, it leads to poorer and less predictable performance. The spill-threshold strategy shows deteriorating performance under 50 MB of space and diminishing returns above 80 MB. Thus, all experiments used 80 MB of in-flight log space per task. When the in-flight record log grows larger than the available memory, the log spills buffers to disk. Since both reading and writing have a sequential access pattern, the spill-threshold strategy (Section 6.1) yielded the best results. The size of the determinant buffer pool has no effect on performance, but too small a buffer pool may lead to deadlocks. Experimentally, we found that for DSD=1 a determinant buffer pool of 5 MB is more than sufficient for most workloads. When DSD=Full, this value must be increased as D grows, since more logs are replicated.
### 8 RELATED WORK
Our contributions are related to fault tolerance, high availability, and causal logging. An elaborate study of fault tolerance and high availability in stream processing is provided in a survey [24].
### 8.1 Fault Tolerance
A number of early stream processing systems provided fault tolerance, such as Aurora [16] and Borealis [9].
However, most fault tolerance approaches of the time did not recover a system-wide consistent state, with very few exceptions [39]. More recent systems like Apache Flink [12], IBM Streams [30], and Microsoft Trill [14] achieve consistent exactly-once fault tolerance with global rollback recovery, as described in Section 3. Other systems, such as Storm [43], Heron [32], and Samza [37], implement an at-least-once consistency guarantee. Streaming systems to date increasingly try to handle failures locally, that is, without disrupting a job or regions of it, but only its failed components. Apache Spark [8] performs exactly-once local recovery, but in a micro-batch processing model and assuming an idempotent sink that ignores already-produced results on recovery. Consistent local recovery is offered by SEEP [22] and its extension based on stateful dataflow graphs (SDG) [7], TimeStream [38], StreamScope [35], and Rhino [18]. However, none of these systems supports nondeterministic computations, and they make strong assumptions about input order. The only stream processing system that delivers consistent local recovery and can support nondeterministic computations with minimal assumptions is MillWheel [3]. However, MillWheel performs a transaction per record per operator on Spanner [17]. To achieve low latency, Spanner depends on atomic clocks, which do not exist in commodity clusters. Clonos can provide MillWheel's guarantees and consistency on commodity hardware. Table 1 summarizes all systems' determinism assumptions.
### 8.2 High Availability
Existing work on high availability in stream processing [29] proposes active replication [9, 39], passive replication [27, 33], hybrid active-passive replication [26, 42], or models multiple approaches and evaluates them with simulated experiments [13, 29]. These approaches either constrain operator logic or support consistency guarantees weaker than exactly-once.
Clonos delivers high availability based on passive replication by substituting only the failed tasks. At the same time, Clonos maintains exactly-once consistency guarantees that cover nondeterministic computations, using causal logging on a feature-rich, production-grade system.
### 8.3 Causal Logging
We presented causal logging [19, 21] in Section 3. We elaborated on both the system design and implementation aspects of the causal log in Section 6, and on the nondeterministic aspects in Section 4. Among streaming systems, TimeStream [38] and StreamScope [35] use an optimistic-logging-inspired dependency tracking approach, which records the input and output dependencies of computations and uses them to rebuild state if needed. Instead, Clonos records all nondeterministic events and the order of execution. By additionally respecting the always-no-orphans condition, Clonos can guarantee consistent local recovery. Closest to the spirit of Clonos is the lineage stash [46], which uses causal logging to provide exactly-once consistency with local recovery for nondeterministic operators. However, it does not support important nondeterministic functions in stream processing, such as the timer-based services needed for processing-time windows and the watermarks required for progress tracking and out-of-order data. In addition, it uses a micro-batch architecture, while Clonos implements continuous data processing. Finally, Clonos also addresses issues of high availability with standby tasks, state shipping, and reconfiguration.
### 9 CONCLUSIONS
In this paper we presented Clonos, a fault-tolerance and high-availability method built into Apache Flink as a replacement for its current fault tolerance mechanisms. Clonos is, to the best of our knowledge, the first fault tolerance mechanism that is applicable to a real, production-grade system and achieves consistent local recovery, high availability, and the flexibility of nondeterministic computations.
Clonos has been a substantial engineering effort within our team (more than 20K LOC), and we continue to improve the overhead of causal logging. Our experiments so far have shown that Clonos can be competitive (5-24% overhead in throughput and latency) with the current fault-tolerance mechanism of Flink, which is industry-proven and serves billions of events every day across multiple industries. At the moment, we are extending our work toward reducing the overhead of causal logging through compressed data structures and extending Clonos' guarantees to low-latency exactly-once output.
**Acknowledgements.** This work has been partially funded by the H2020 project OpertusMundi No. 870228, and the ICAI "AI for Fintech Lab" project. Experiments were carried out on the Dutch national e-infrastructure with the support of SURF Cooperative.
Building the Better Macro: Best Practices for the Design of Reliable, Effective Tools
Frank DiIorio, CodeCrafters, Inc., Philadelphia PA
ABSTRACT
The SAS® macro language has power and flexibility. When badly implemented, however, it demonstrates a chaos-inducing capacity unrivaled by other components of the SAS System. It can generate or supplement code for practically any type of SAS application, and is an essential part of the serious programmer's tool box. Collections of macro applications and utilities can prove invaluable to an organization wanting to routinize work flow and react quickly to new programming challenges. But the language's flexibility is also one of its implementation hazards. The syntax, while sometimes rather baroque, is reasonably straightforward and imposes relatively few spacing, documentation, and similar requirements on the programmer. In the absence of many rules imposed by the language, the result is often awkward and ineffective coding. Some amount of self-imposed structure must be used during the program design process, particularly when writing systems of interconnected applications. This paper presents a collection of macro design guidelines and coding best practices. It is written primarily for programmers who create systems of macro-based applications and utilities, but will also be useful to programmers just starting to become familiar with the language.
INTRODUCTION
Let's start with two possibly painfully familiar scenarios. In the first, we use a general-purpose macro to count observations in a dataset. Here is an excerpt:

```sas
%do i = 1 %to &nDatasets.;
   %let dset = %scan(&dsList., &i.);
   %dsetN(data=&dset., count=nobs)
   %if &nobs. > 0 %then %do;
      … ODS, PROC REPORT statements …
   %end;
%end;
```

The program runs for minutes instead of seconds, repeatedly printing the first dataset in &DSLIST. We eventually trace the cause to macro %dsetN, which set the value of the macro variable i to 1.
Thus, when control returned to the calling macro, i was always 1, rather than the expected progression of 1, 2, …, &nDatasets. We tried to do the right thing (use general-purpose macros to reduce code volume) but instead became another example of the adage "no good deed goes unpunished." The second scenario has a better, but still imperfect, outcome. We use an application to create a SAS dataset, then use the output dataset in PROC PRINT.

```sas
%missVars(data=clin.ae, out=ae_miss, keep=usubjid)
proc print data=ae_miss;
   id usubjid;
   title "Variables in CLIN.AE that had all missing values";
run;
```

The macro ran successfully and created the dataset as described in the user documentation. What %missVars also did, however, was leave several datasets and global macro variables in the SAS session. These unwanted artifacts were not problematic in this use of the macro, but it is easy to see how unexpected datasets, option settings, and the like could have negative consequences. This paper presents a collection of macro design guidelines that prevent these and similar scenarios. The list does not pretend to be exhaustive, but it does give the macro programmer a set of tools to work with. We will see that subservience of individual style to a rigid set of guidelines is not necessary, and that all that is required of the reader is an awareness of what the guidelines are attempting to accomplish (and prevent!).
KEYS TO GOOD DESIGN
To understand why good design is important and how guidelines should be structured, think of a craftsman's workshop. It is a well-organized space, filled with specialized tools. Some of these are used every day. Others, while rarely used, are lifesavers when needed, filling a specialized requirement. They can make a nearly impossible task appear simple. So it is with the programmer's virtual workshop. We need specialized tools so we can build general-purpose applications.
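Looking back at the first scenario: the root cause was that %dsetN used, and reset, the caller's macro variable i. A minimal sketch of a safer version (our own reconstruction of such a utility, not the original code) declares every work variable %local, so the utility cannot clobber its caller's symbols; here ATTRN supplies the observation count, so no loop index is needed at all, but if a utility does loop, its index must likewise be %local:

```sas
%macro dsetN(data=, count=);
   /* %local keeps work variables in this macro's scope, so a  */
   /* caller's i (or dsid, rc) is never overwritten.           */
   %local dsid rc;
   %let dsid = %sysfunc(open(&data.));
   %let &count. = %sysfunc(attrn(&dsid., nlobs));
   %let rc = %sysfunc(close(&dsid.));
%mend dsetN;
```

Note that the caller should declare the COUNT= target itself (e.g., %local nobs;) before the call, so the result lands in the caller's scope rather than vanishing with %dsetN's.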
While our choice of tools is not completely encompassed by the macro language, macros are a significant asset in most SAS programming shops. Learning how to build them properly and how to use them effectively is an important step for any would-be master programmer. The collection of guidelines presented below attempts to present generalities rather than specifics. The goal throughout is to guide the reader's thinking about how to write professional tools that reduce program volume, increase program reliability, and make life in the virtual workshop more enjoyable. The list is not exhaustive, and some of the points are arguable. Simply review them with an open mind and with an eye toward how they might complement and improve your programming.
1. Know When a Macro Is and Is Not Necessary
Macro programmers, especially those who have just learned the language, would do well to remember the adage, "to someone with a new hammer, everything looks like a nail." In some situations the macro language is the only possible clean solution. In others, it is used unnecessarily and therefore needlessly complicates a program. To tell these apart, it is worth identifying the macro language's strengths:
- Repetitive tasks
- Conditional execution of program fragments
- Tasks that have varying execution paths and thus need to be parameterized
Why is it even necessary to discuss this? Because simple code is always easier to understand and maintain than complex code, and the macro language lends itself to complex (sometimes overly complex) coding. If you can create an effective solution without the macro language, do so. Also, recognize that there are situations where the macro language may, after some reflection, not be required.
Some of these situations include:
- A simple, one-time program
- A straightforward program with a small number of parameters (macro variables) where no conditional execution is needed
- A DATA step using hash objects or arrays
- DATA steps using CALL EXECUTE rather than macro calls
- Use of SQL or other programmatically generated statements
- Macro variables and functions used outside a macro (%sysfunc, &sysDate, et al.)
The list could go on and on. The important point here is that the macro language should not be the first and only tool you pick up as you enter the programming workshop. Being aware of SQL, the DATA step, ODS, and other components of the language may result in a non-macro solution, and that's fine. All you have to be is aware and informed. If, after evaluating the programming task at hand, you decide that a macro-based solution is appropriate, that's fine too. (And it makes the rest of this paper worth reading!)
2. Conceptually Separate Utilities and Applications
Even though the overall structure of macro utilities and applications is identical, it is helpful to conceptually separate them. Granularity of the programs' scope of work is the key difference. Some programs are narrowly focused and perform work that is independent of data and projects. The scope of other programs is broader, and possibly unique to a project or sub-system. The former are utilities, the latter are applications.
Examples of macro utilities:
- Count the number of observations in a data set
- Convert one or more macro variables to upper case
- Create a count and list of distinct values of a data set's variable
- Quote each token in a macro variable
- Check for the presence of one or more variables in a data set
Examples of macro applications:
- Print the first "n" observations for every data set in a library
- Create a set of HTML files that display attributes for data sets and variables in a library
- Check the contents of one or more data sets for compliance with internal (corporate) or external (legal, industry) standards
The distinction is clear. Utilities perform generalized, tightly defined activities, while applications have a much broader range of activity. As a rule of thumb, the name of a utility macro should be a near-complete description of what it does. If %quoteList has code for both counting the number of items in a list and quoting the items, it may be better to isolate the item counting in a separate macro, leaving the bulk of the %quoteList program to focus on quoting. Since the nature of utilities is their potential for use by many applications, it is imperative that they "play well with others," doing what was expected by the calling program, but no more. This point is developed at length later in this paper. This is not an artificial distinction. The development mindset is different for each, and they may have different validation and documentation requirements. Other features that vary between utilities and applications are the amount of error checking and the degree of optimization of execution speed and other resources. It is worth briefly noting that the size of a macro, particularly a utility macro, is irrelevant. Rules of thumb that say a macro should be “x” (60, 100, other) statements ignore a reality of macro construction: macros should liberally use other macros to get their work done.
Placing a limit on size is arbitrary, counterproductive, and artificial. Most importantly, limits ignore the real-world reality that some things simply need a lot of code to get done correctly.

3. Clearly Document the Macro

Documentation is written for two audiences: users of the macro and programmers who perform macro maintenance. Even the barest-boned macro must contain this essential content.

3.1 User Documentation. This is usually placed in a comment at the beginning of the macro. Note that this is the minimal documentation set. External documentation – web pages, sample libraries, and the like – frequently complements the internal comments. The amount and content of the header documentation vary, of course, by the complexity of the macro. This list identifies some common features:

- **Macro Name.** File name and/or complete, relative path.
- **Description.** Brief summary of the macro's functionality.
- **Input.** Data sets, with access restrictions if appropriate; parameters, clearly identified with their status as optional or required, their acceptable values, and their default values.
- **Outputs.** Complete description of output files, data sets, macro variables and other artifacts that are produced by the macro, distinguishing, if appropriate, between successful and unsuccessful or incomplete execution. This list, *and nothing else*, is how the user should expect the SAS session to change once execution is complete. Validation of the macro should confirm that all expected and no unexpected output is produced.
- **Processing.** Identify the names of any intermediate SAS data sets, macro variables, and other artifacts that are created during execution of the macro.
- **Execution.** Requirements such as running in open code, using a particular version of SAS, etc. The first executable statements in the macro should be those that test for these conditions. The macro should issue error messages and terminate if any of the requirements are not satisfied.
- **Examples of Use.** Start with simple calls to the macro, taking as many default values as possible, followed by as many other, more complex examples as necessary.
- **Error Conditions.** Items that would cause the macro to fail during:
  - Initialization: invalid or inconsistent parameter values; required resources that were not found
  - Execution: insufficient number of observations in a data set; a variable that has no non-missing values
- **History.** Identify the programmer name, the date of the change, a revision code used throughout the program so the "touches" can be easily identified, and a short description of the change. The history comments should also identify changes associated with versions and maintenance releases.

The exact contents and format of the header may be prescribed by corporate or other standards. The contents will evolve over time, not just due to enhancements and fixes, but also due to usage. As end users and programmers use the macro, it is likely that gaps, unclear wording, and the like will surface. As they emerge, they should be remedied.

Creating the header comment can be at least somewhat automated. The UltraEdit text editor, for example, allows definition of templates. These may contain boilerplate text as well as special text sequences for insertion of the file name, extension, and date. This is shown below. Other text editors have similar capabilities. They should be utilized whenever possible.
<table>
<thead>
<tr>
<th>Template (prefilled items in bold)</th>
<th>Inserted text (resolved text in bold)</th>
</tr>
</thead>
<tbody>
<tr>
<td>/*</td>
<td>/*</td>
</tr>
<tr>
<td>Name: <strong>[FILE_NAME]</strong></td>
<td>Name: <strong>quoteList.sas</strong></td>
</tr>
<tr>
<td>Description:</td>
<td>Description:</td>
</tr>
<tr>
<td>Input:</td>
<td>Input:</td>
</tr>
<tr>
<td>Output:</td>
<td>Output:</td>
</tr>
<tr>
<td>Usage Notes:</td>
<td>Usage Notes:</td>
</tr>
<tr>
<td>References:</td>
<td>References:</td>
</tr>
<tr>
<td>History: Date Init Comments</td>
<td>History: Date Init Comments</td>
</tr>
<tr>
<td><strong>[DATE_USER_END]</strong> FCD Initial release</td>
<td><strong>2007-10-22</strong> FCD Initial release</td>
</tr>
<tr>
<td>*/</td>
<td>*/</td>
</tr>
</tbody>
</table>

Foundations and Fundamentals NESUG 2010

3.2 Programmer Documentation. The other audience, the macro programmer, is interested in aspects of the program not covered by the user-oriented instructions found in the header comment. In addition to being concerned with the inputs and outputs described in the header comment, the programmer also needs to know how processing takes place. This need is at least partially satisfied by inserting comments throughout the program.
Ideally, a programmer who has never seen the macro could review it and see the following at a glance:

- The start of major processing blocks
- Discussions of sections that were problematic to write or required non-standard or non-obvious algorithms
- Revision codes (these are discussed at length in Guideline 8, below)

These uses of comments are demonstrated below:

```sas
/*** Verify parameters *****/
%if condition %then %do;
   %let OKflag = f;
   %put error message;
%end;

/*** Per-dataset loop *****/
%do i = 1 %to &dsetN.;
   %let dataset = %scan(&datasetList., &i.);

   /* Use ANYMISS to identify obs with all missing values */
   %anyMiss(data=&dataset., missing=allMissing)

   %if &allMissing. ^= %then %do;  /* [U03] Add test for ALLMISSING */
      /* Important! PDF hyperlink workaround per track # 5607646 (2007/11/10) */
      /* write to PDF ... */
   %end;
%end;

/*** Clean up *****/
proc delete data=datasetList; run;
```

Just as there is no "right" or "wrong" header comment, so too is there no single "correct" form of the programmer's documentation. The objective is simply to make the macro as readable as possible. Judicious use of comments, together with external documentation such as flow charts and other data diagramming techniques, should give the programmer a clear idea of the code's structure and functionality.

4. Use Keyword Parameters Consistently

In macro design, as anywhere else in programming, designing for usability benefits anyone touching the program: users, developers, validation programmers, maintenance programmers, and so on. One of the simplest ways to make the macro user-friendly is using keyword parameters.

4.1 Parameters versus Keywords. Consider this macro call:

```sas
%xpt(ae cm dm, 100, , delete)
```

Somehow the user knows that %xpt should process data sets AE, CM, and DM, that the resulting files should be broken into 100MB pieces, and that preexisting versions of the files should be deleted at the start of processing.
The user also knew that the deletion option is the fourth parameter, necessitating the use of a placeholder comma. It is a lot to ask of the user. No matter how functional and helpful it may be, the macro's invocation is decidedly "user-unfriendly," since no one wants to remember the order or meaning of parameters. Its design imposes requirements that encourage incorrect usage. Consider this keyword-parameter alternative:

```sas
%macro xpt(data=, split=50, report=no, delete=no);
```

These calls to the macro are equivalent to the earlier, positional-parameter version. Both have the advantage of being more readable:

```sas
%xpt(data=ae cm dm, delete=yes, split=100)
%xpt(delete=yes, data=ae cm dm, split=100)
```

The user doesn't have to worry about parameter order, and the program becomes somewhat self-documenting by having parameters and their values present in the call to %xpt.

4.2 Naming and Values. Just as the presence of keyword parameters makes the macro more usable, so does consistency of how they are named and given values within a system of macros. Parameters that identify similar actions or content should be named consistently. Parameter values should also be coded consistently. Consider these syntactically correct macro calls:

```sas
%createSpec(datasets=ae cm, output=aecmspec.pdf, sortBy=pos, msg=yes)
%xpt(data=ae cm, msg=t, rpt=aeCMxpt)
```

In only two macros we have managed to lay waste to a good portion of usability. We specify a list of data sets with the DATASETS parameter in %createSpec, but use DATA in %xpt. We specify an output file by OUTPUT in %createSpec, but RPT in %xpt. Notice, too, that we need to specify the full file name in %createSpec but not in %xpt, where, presumably, the macro adds .PDF to the RPT parameter value. Finally, we muddy the waters by having a variety of ways to say "yes": in %createSpec, MSG=yes, and in %xpt, MSG=t.
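One payoff of keyword parameters is that their values can be validated and normalized in a single place, at the top of the macro, so the rest of the code tests one canonical value. The fragment below is only a sketch of how %xpt might do this; the internal coding is assumed rather than taken from the paper, and the %RETURN statement requires SAS 9.2 or later:

```sas
/* Sketch (assumed internals): normalize a yes/no keyword parameter up front */
%macro xpt(data=, split=50, report=no, delete=no);
%let delete = %upcase(&delete.);
%if &delete. ^= YES and &delete. ^= NO %then %do;
   %put ERROR: DELETE must be yes or no (received: &delete.);
   %return;
%end;
... other statements not shown ...
%mend xpt;
```

After this check, every later test can simply be `%if &delete. = YES`, regardless of whether the caller typed `yes`, `Yes`, or `YES`.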
Adopting a single set of parameter names and values across the system of macros improves usability. With some standards in place, some of which may require rewriting part of the %createSpec macro, we change the calls to:

%createSpec(data=ae cm, output=aecmspec, sortBy=pos, msg=yes)
%xpt(data=ae cm, msg=yes, rpt=aeCMxpt)

### 5. Use Consistent Program Structure

Just as we saw benefits from consistency of keyword parameter usage and naming, we also see benefits from using similar structure for all macros in a library. Once the structure is specified and its implementation becomes standard practice, the programmers who maintain and enhance the macros can more easily locate relevant portions of the program. In general terms, a macro should consist of four sections:

- **Header Documentation** (as described in the "Documentation" section, above)
- **Initialization**
- **Core processing.** This is the "meat" in the "sandwich" of program structure. Once the Initialization section is complete, the program says, in effect, "If I'm still executing it must mean that parameter values were valid and I can do the work that was described in the program header." Here, and throughout the macro, processing consists of original, macro-specific coding as well as calls to utility macros. If a condition arises during this processing that prevents continued execution, control branches to the Termination section, described next.
- **Termination.** Regardless of success or failure, almost every invocation of the macro should end in a termination section.
Anthropomorphizing again, this is where the macro says "I'm done, so now let me check to make sure I'm not leaving anything behind that wasn't what I promised in the header documentation." This checklist includes, but is not limited to: deleting temporary macro variables, files, and data sets; resetting options back to their original values; and writing messages to the SAS Log describing output items' names and values.

The last section is vital. Without it, the macro has the potential to do its required tasks (create data sets, write a report, quote tokens in a list, etc.) and leave behind temporary data sets, global macro variables, system options and the like that are different from what they were before the macro executed. Here, as in the practice of medicine, the principle is "first, do no harm." Altering the macro user's environment is not only harmful, it leaves the impression that the macro coding is sloppy and unprofessional. Cleanup is key.

### 6. Emphasize User, Program Communication

It is appealing to think of a macro as a nicely behaved black box that receives input, processes according to the documentation, and creates the expected output. This usually turns out to be the macro's minimal set of actions. Effective long-term use of both utility and application-class macros requires messaging in different forms. Let's look at the two principal recipients of macro messaging: the user and other programs.

#### 6.1 Communication with the user

A well-designed and well-written macro should give the user the option to view messages. These include messages noting that the macro reached one or more checkpoints, the parameters it received, data set counts, macro variable values, and other information useful to the user as well as the macro developer, in case debugging is required.
Messaging can be toggled by a macro parameter and handled throughout the program using a coding technique shown below:

```sas
%macro test(msg=yes, other parameters);
%local prt;
%if %upcase(&msg.) = NO %then %let prt = *;

%&prt.put test-> Dataset contains &dsnCount. observations;

data _null_;
   set rpt;
   &prt.put dsn= vname= status=;
run;
```

This method is both simple and powerful: if parameter MSG resolves to YES, macro variable PRT is null. If it is NO, PRT becomes *. References to PRT throughout the program result in it becoming a comment statement or an executable PUT or %PUT statement. This enables us to turn the messaging on or off with a macro parameter, rather than manually changing all the affected statements.

Some messages will be unconditional, and not subject to control by MSG-like parameters. Warning and error messages should always be displayed. Less essential, but still helpful, are messages about new functionality. These can be controlled via an automatic macro variable, as shown below:

```sas
%if %sysfunc(juldate("&sysdate9."d)) < %sysfunc(juldate("01apr2008"d)) %then %do;
   %put;
   %put Version 4.00 effective March 1, 2008:;
   %put;
   %put Changes from Version 3.x:;
   %put list of changes goes here;
   %put See documentation location for details;
   %put;
%end;
```

Say we moved the macro into a general-use/production area on March 1, 2008. This code fragment will automatically display messages about new functionality for a month.

A final note about messages to the user. When programming them, take the time to make them informative. A series of well-crafted messages from the macro should tell the story of its execution, with text accompanying values. Which would you rather see in a SAS Log?

```
2 dsns AE DM
```

or

```
After filtering, process 2 eligible data sets:
   1 of 2: AE
   2 of 2: DM
```

Here, as with other practices described above, spending a little extra time makes the macro output appear more professional, and raises the user's comfort level.
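The itemized log style shown above can be generated with a short macro loop. This is a sketch under assumed names (%listDsns and its DSNLIST= parameter are illustrative, not from this paper; COUNTW requires SAS 9 or later):

```sas
/* Sketch: write an itemized "n of N" list of data set names to the Log. */
/* Macro name and parameter are illustrative only.                       */
%macro listDsns(dsnList=);
%local i dsnN;
%let dsnN = %sysfunc(countw(&dsnList., %str( )));
%put After filtering, process &dsnN. eligible data sets:;
%do i = 1 %to &dsnN.;
   %put %str(   )&i. of &dsnN.: %scan(&dsnList., &i.);
%end;
%mend listDsns;

%listDsns(dsnList=AE DM)
```

A few extra lines of looping produce a message that reads as a narrative rather than a bare dump of values.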
### 6.2 Communication with other programs A less visible but equally important form of communication takes place between programs in the form of return codes. These are entities, usually macro variables, that tell the calling program how the macro completed. By convention, success could be 0, incomplete execution -1, and so on. The important point here is that the calling program needs to know the range and meaning of the return codes. This, of course, could be documented in the macro's header comments. Suppose macro %obsCount had these lines in its documentation header: ``` Return Codes: 0 = Data set located, observations counted successfully 1 = Data set located, but could not count observations 2 = Data set could not be located 3 = Parameter errors or incorrect calling environment ``` Programs using %obsCount could use its return codes as shown below: ``` %obsCount(data=mast.rand, rc=randN) %if &randN. ^= 0 %then %do; %put Could not determine obs count in mast.rand. Execution terminating; %goto term; /* Jump to program termination section */ %end; ``` This and other techniques already shown require more programming by the macro programmer. Here, too, we have a better program: if something was amiss with dataset mast.rand, the calling program can fail gracefully rather than littering the Log with SAS-generated error messages. Handling return codes requires more programming, but results in a cleaner, more informative, and more professional-appearing Log. ### 7. Control Macro Variable Scope Recall the first scenario at the beginning of this paper. A utility used by a macro set variable I to 1. This disrupted execution of the calling program, which also had a macro variable named I. Besides demonstrating a breathtaking lack of variable-naming imagination, the example also highlights the need for awareness of macro variable scope. This is an important topic, worthy of attention in an entire paper. Here, however, we'll focus on a few Best Practices surrounding scope. 
First, be aware of global and local scopes: a macro variable defined in open code or explicitly, via a %global statement, has global scope; it is available to be read or written anywhere in the program. A locally scoped macro variable is defined in a macro and is available to all macros invoked by that macro. Let's look at an example of the dangers of uncontrolled scoping:

```sas
%macro outer(list=);
%let upper = %sysfunc(countw(&list., %str( )));
%do i = 1 %to &upper.;
   %let token = %scan(&list., &i.);
   %print(data=&token.)
%end;
%mend;

%macro print(data=);
%let i = %index(&data., .);
%if &i. = 0 %then %let data = work.&data.;
... other statements not shown ...
proc print data=&data.;
   title "&data.";
run;
%mend;
```

Variable I was defined in OUTER's symbol table and so was visible in %print and other subordinate macros. The change to I in %print meant that I's value was altered once execution returned to OUTER, making its execution unreliable at best. There are several practices that help avoid this variable collision.

7.1 Explicitly Identify Local Variables. Use %local statements to create copies of I in each macro:

```sas
%macro outer(list=);
%local i;
%do i = 1 %to &upper.;
   %let token = %scan(&list., &i.);
   %print(data=&token.)
%end;
%mend;

%macro print(data=);
%local i;
%let i = %index(&data., .);
%if &i. = 0 %then %let data = work.&data.;
... other statements not shown ...
%mend;
```

7.2 Choose Unique Variable Names. Select a consistent prefix for each macro, using a value that will not be used by other macros in the system. Identify the prefix in the program's header comment.

```sas
%macro outer(list=);
%local OUTi;
%do OUTi = 1 %to &upper.;
   %let token = %scan(&list., &OUTi.);
   %print(data=&token.)
%end;
%mend;

%macro print(data=);
%local PRI;
%let PRI = %index(&data., .);
%if &PRI. = 0 %then %let data = work.&data.;
... other statements not shown ...
%mend;
```

Using %local statements and/or naming conventions ensures a variable's scope is limited to that macro. If a macro variable needs to be shared between macros, it should be explicitly declared as global. Ideally, all variables used in a macro will be identified in %global or %local statements, even if your knowledge of the macro symbol table hierarchy makes you feel this is unnecessary.

8. Implement Diagnostic and Debugging Code

No matter how good you, the programmer, feel about yourself and your abilities, chances are your macro will break at some point. Stated more gently, chances are your macro will need attention to deal with unanticipated situations. Diagnostics and coding techniques associated with debugging are different from the communication issues dealt with in the earlier sections. We will discuss two of them here: debug parameters and revision codes.

8.1 Debug Parameters. Conceptually, this is not very different from the impact of MSG=YES discussed earlier. In this context, however, the only consumer of the information is the macro developer. A macro parameter can be used to control the number of %PUT, PUT, and PROC statements that are executed. This output is over and above that produced by a "normal" execution of the macro, and provides information that the programmer may find useful when ferreting out anomalous (or nonexistent) output. The parameter can take on different values so that the larger the number, the greater the amount of output. This is shown in the following example (only debugging-related code is shown):

```sas
%macro rpt(debug=0, other parameters);

%if &debug. > 0 %then %do;
   %put Global macro variables at start of execution:;
   %put _global_;
%end;

%if &debug. >= 1 %then %do;
   proc freq data=_TMP_2;
      tables grp1-grp&n.;
      title "Grouping variables from transposed master data";
   run;
%end;
```

The result is as effective as it is simple. DEBUG defaults to 0, which produces no extra output.
If its value is greater than 0, we display all global macro variables. If it is 1 or greater, we also get a frequency distribution of variables that we know will be helpful diagnosing the problem. Just how much output is produced by which levels of DEBUG is a matter of experience, and this experience is usually gained during the initial development of the macro. As time goes on and the macro's features are enhanced, the amount and control of the diagnostic output can be easily altered.

8.2 Revision Codes. A useful utility or application will, over the course of its life, need maintenance and encourage enhancement. This, in turn, means that the original program will be altered. The usual way of doing this is a line in the header comment:

```
History: 2007/11/23 FCD Initial version in autocall library
         2007/12/22 FCD Add handling for empty datasets
```

Presumably the initial version of the macro assumed the input data would always have a nonzero number of observations to process. In reality, there were situations where this was not the case. In the above example, rather than have the macro "die noisily," programmer FCD made the necessary changes just before leaving for a well-earned Christmas vacation. The question then becomes what, exactly, was changed in the program? If the modification was not sufficient or if similar changes need to be made in other macros, it would be nice to quickly locate the affected statement(s). Revision codes solve this problem. First, let's revisit the header comment:

```
History: 2007/11/23 FCD Initial version in autocall library
         2007/12/22 FCD [U01] Add handling for empty datasets
```

We use the convention [Unn] ("update number nn") to identify the change. Then throughout the program we reference it:

```
%dsetCount(data=pass1, count=npl)
%if &npl.
< 1 %then %do; /* [U01] */ %put First filtered pass resulted in an empty data set; %put Execution terminating.; %goto term; %end; /* [U01] end */ ``` We mark off the portion of the program affected by the change with comments, adding an additional note that the %end statement was the last location where the change took place. This way, the programmer can easily locate all related changes simply by searching for the text string U01. It's not unlike Hansel and Gretel leaving a trail of bread crumbs in their walk through the woods so they can find their way home (perhaps not the best analogy, since birds ate the crumbs and the children became lost; however, the general point is sound). As with messaging and diagnostics, it requires additional programming. Indeed, in this case the extra effort does not even add any executable statements to the program. The improvement in maintainability is marked. All that is required is a standard to adhere to and a little discipline when making changes. 9. Use Built-In Macro Tools Rather than reinvent the wheel and possibly end up with a less-than-round result, you should take advantage of the macro tools that come with SAS. The brief listing that follows identifies language features that aid program development, debugging, and distribution: 9.1 Automatic Macro Variables. SAS automatically creates and maintains a host of automatic macro variables. Some of their functionality can be duplicated by DATA step or other coding. Others are unique to the macro language. In either case, it pays to know what is available. In the following example, a “macro unaware” solution needlessly uses a DATA step to create a macro variable, while the “macro aware” code simply accesses the SYSDATE9 variable. The amount of code is reduced and makes the program more readable. 
<table>
<thead>
<tr>
<th>Unaware of Automatic Macro Variables</th>
<th>Utilizes Macro Variables</th>
</tr>
</thead>
<tbody>
<tr>
<td>data _null_;</td>
<td>footnote &quot;Run date: &amp;sysdate9.&quot;;</td>
</tr>
<tr>
<td>x = today();</td>
<td></td>
</tr>
<tr>
<td>call symput('date', put(x, date9.));</td>
<td></td>
</tr>
<tr>
<td>run;</td>
<td></td>
</tr>
<tr>
<td>footnote &quot;Run date: &amp;date.&quot;;</td>
<td></td>
</tr>
</tbody>
</table>

The previous example showed how automatic variables can replace cumbersome coding. Some variables make an otherwise impossible or incredibly convoluted task straightforward. The variable SYSINDEX holds the number of macro executions that have been made so far during a SAS session. Normally, this is a "so what" item. Recall, however, the programmer workshop metaphor we used at the start of the paper. SYSINDEX is a prime example of the obscure, seldom-used tool that becomes a lifesaver.

Consider macro %ISO, which requires an array of date and time variables to assemble an ISO 8601-compliant date-time variable. Since different date components might be used in several calls to %ISO in the same DATA step, we cannot use the same array name. Adding the SYSINDEX value to the array name ensures that multiple %ISO calls will generate multiple, and unique, array names.

```sas
%macro iso(date=, out=);
   … statements not shown …
   array dTParts&sysindex.(6);
   … statements not shown …
%mend;

data ae;
   set clin.ae;
   %iso(date=onset)
   %iso(date=term)
run;
```

The first %ISO call might generate array DTPARTS45, the second DTPARTS46. The automatic macro variable guarantees that the array names will be unique, and is a good example of the payoff from browsing the help files even if you think you've "seen everything."

9.2 Autocall Libraries. Recall the design goals described earlier in this paper: we document the macro, provide clear messages to the user, and take other steps to make it as user-friendly as possible.
The macro itself must be easily made accessible to the user. This is handled by the SAS autocall facility, as shown below: ```sas options sasautos=('path1', 'path2', sasautos) mautosource mrecall; ``` The effect is similar to paths for formats and ODS item stores: a library of tools is made available to the user with a single option statement. Programs using these options do not have to use %include or other cumbersome constructs. A cautionary aspect of autocall usage is knowing where a macro is coming from; SAS will use the first instance of a macro as it searches the autocall path. Using the above example, if %FREQS is defined in both path1 and path2, SAS will use the copy found first, in path1. The MAUTOLOCDISPLAY system option displays the source of macros as they are used. Like MPRINT and various other options, it can be toggled on and off via debugging parameters (see "Options," below). 9.3 Functions. Just as you should use automatic variables to reduce code volume and make programs more readable, so should you use macro functions. Functionality that is not addressed by native, autocalled macro functions is usually found in other Base SAS functions. These are made accessible via the %sysfunc and %qSysfunc functions. Given the relatively few limits on these functions' use within a macro, there should be relatively few instances of a DATA step being used solely to produce a macro variable. This is shown below: the DATA step, while clever, is unnecessary. The %sysfunc-aware code is more succinct, executes faster, and is easier to maintain. 
<table>
<thead>
<tr>
<th>Unaware of %sysfunc</th>
<th>Uses %sysfunc</th>
</tr>
</thead>
<tbody>
<tr>
<td>data _null_;</td>
<td>%if %sysfunc(exist(master.clin)) %then %do;</td>
</tr>
<tr>
<td>call symput('found', 'n');</td>
<td>… statements not shown …</td>
</tr>
<tr>
<td>set master.clin;</td>
<td>%end;</td>
</tr>
<tr>
<td>call symput('found', 'y');</td>
<td></td>
</tr>
<tr>
<td>stop;</td>
<td></td>
</tr>
<tr>
<td>run;</td>
<td></td>
</tr>
<tr>
<td>%if &amp;found. = y %then %do;</td>
<td></td>
</tr>
<tr>
<td>... statements not shown ...</td>
<td></td>
</tr>
<tr>
<td>%end;</td>
<td></td>
</tr>
</tbody>
</table>

### 9.4 Options

As mentioned in the abstract, the macro language has tremendous flexibility as well as the potential to produce chaos if programmed incorrectly. Fortunately, the variety and number of macro-related system options increases with each release of the SAS System. You do not have to use `MPRINT`, `MAUTOLOCDISPLAY`, `MLOGIC`, and `MPRINTNEST` in every program, but you should at least be aware of their existence and purpose.

Be aware that some of these options, while providing essential information for debugging, can clutter the SAS Log to the point of being unreadable. One way to control the volume of output is an extension of the debugging parameter technique shown in Section 8, above. Notice that we follow the standard good practice of preserving option values, setting them to meet the desired debugging level output, and then reverting them to their original values in the termination section.

```sas
%macro rpt(debug=0, other parameters);
%local opts;

%if &debug. > 0 %then %do;
   %let opts = options %sysfunc(getoption(mprint))
               %sysfunc(getoption(mautolocdisplay)) %str(;);
   options mprint mautolocdisplay;
   ... other DEBUG > 0 actions ...
%end;

/* termination section */
&opts.
%mend;
```

### 10. Build the Other Tools You Need

There is, sadly, a dearth of macro tools built into the SAS System. Macro-related options abound, and many are helpful, but they can be verbose, cluttering the SAS Log to the point of becoming unreadable. Sometimes there are diagnostic tools that you would like but that simply do not exist. One of the hidden costs of having a robust library of macro utilities and applications surfaces when we look at home-grown tools that need to be developed. This section describes two such tools. One is presented in full, while the second is described only at a high, non-coding level. Both should give a feel for the kind of supplemental tools that are helpful and the programming effort involved.
#### Tool 1: Variable Display

During macro development and debugging it is often helpful to display a list of all global macro variables and their values. Anyone who has used `%put _global_;` for this purpose is aware that it is both alluring in its simplicity and disappointing in its output. The values come out in an order that is, to put it kindly, not discernible to the naked eye. A simple macro, shown here without comments for the sake of space, follows:

```sas
%macro printMacvars;
%local _opts;
%let _opts = %sysfunc(getoption(mprint)) %sysfunc(getoption(notes));
options nomprint nonotes;

proc sql noprint;
   create table _macvars_ as
   select * from dictionary.macros
   where offset=0 and scope='GLOBAL'
   order by name;
quit;

%if &SQLobs. = 0 %then %do;
   %put AllMacVars-> No global macro variables matched search criteria;
   %goto bottom;
%end;

data _null_;
   set _macvars_ end=eof;
   file log notitles;
   if _n_ = 1 then put / 'Macro Variable' @34 'First 50 Characters'
                       / 32*'=' +1 50*'=';
   put name $33. value $char50.;
   if eof then do;
      put 32*'=' +1 50*'=';
      put '# of variables = ' _n_;
   end;
run;

%bottom:
options &_opts.;
%mend printMacvars;
```

The utility is simple and powerful, and introduces the use of SAS metadata (dictionary tables) as an adjunct to tool development. Let's look at the macro in action. If we defined two global macro variables, `global1` and `testmacvar`, a call to `%printMacvars` would produce the following output:

<table>
<thead>
<tr>
<th>Macro Variable</th>
<th>First 50 Characters</th>
</tr>
</thead>
<tbody>
<tr>
<td>GLOBAL1</td>
<td>G1</td>
</tr>
<tr>
<td>TESTMACVAR</td>
<td>tmv</td>
</tr>
</tbody>
</table>

# of variables = 2

The display is easy to read and clearly labeled. The alphabetical ordering of the variables doesn't suggest a huge improvement over `%put _global_` in our simple test case, but it doesn't take much imagination to see how this presentation would be helpful when dozens of variables are involved.
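The test case above can be reproduced in open code. A %let outside any macro creates a global-scope variable, so both variables appear in DICTIONARY.MACROS with SCOPE='GLOBAL' and are picked up by the utility:

```sas
/* Create the two globals from the example, then display them */
%let global1 = G1;
%let testmacvar = tmv;
%printMacvars
```

This kind of two-line smoke test is also a convenient starting point when validating the utility after maintenance.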
#### Tool 2: Header Web Page

Section 3 discussed the merits and importance of the header comment at some length. What was not addressed was the difficulty of locating the macros and having a quick way to read the headers. When all macros are in a single directory, and when there are relatively few of them, printing them or opening them (as read only!) in an editor is not too tedious. Consider, however, that a mature system of macros which includes utilities and a reasonable range of applications can easily approach 100 individual files. In this scenario, printouts and text editors are no match for the complexity of the system.

One approach is to exploit the similarity of the macros' structures. Suppose each macro header is a single comment, such as:

```sas
/* xxx.sas
... header comments ...
*/
```

We can exploit this consistency by writing a program that does the following:

- Identifies all directories in the autocall library path
- Reads each *.sas file in these directories. For each file, it stores the complete path name and writes an HTML file with the program name. The HTML contains the lines from the top of the source file up to and including the first line containing a */ (the end of the header comment). The HTML also contains a hyperlink to the source file.
- Writes other HTML files so that the final product is a frame set with a navigation pane.

The output from this tool is shown in Figure 1, below.

### 11. Adopt the Software Development Mindset

This last set of guidelines develops a best-case scenario. You have constructed a library of utility macros that are liberally used by macro-based applications. These applications are well-received by end users, creating demand for more options and, eventually, completely new applications. As a developer, you are also an end user of your utility macros. As such, you see the occasional need to add parameters to the macros and to build new utilities to meet the expanding demands of applications.
Consciously or not, you have become a software developer. If your programs were entirely self-contained (no `%include` statements or macro references) and if you were the program's only user, then change is not an issue. You might make a backup copy of the program, then alter the program and use it. The change in functionality or method of invocation is irrelevant because it is immediately known to the program's entire user community (i.e., you). If a change did not work as planned you can simply tweak the program and rerun it until it meets your requirements.

However, once programs are dependent on utilities and other users have developed expectations about an application's input and output, you have moved into an entirely different realm of programming. Some of the key coins of this realm are separation of development and production areas, and validation. Each of these is a large topic. We present only a cursory look at them here, limiting ourselves to the macro development context, and are fully aware that these topics are controlled to varying degrees by standard operating procedures in most organizations.

### 11.1 Separation of Development and Production

Source code is typically stored in a directory structure that reflects the life cycle of the macro. A common organization is to have development, testing, and production directories. Program setup files or macros can take advantage of this structure by prepending development directories to the autocall path. In the following example, testDir1 and testDir2 contain macros that are undergoing development or revision. Invoking %progSetup with DEV=YES inserts these locations in front of the default (production) directories, and ensures that program x.sas in the prepended test directories will be used instead of any x.sas found in the production directories.

```sas
%macro progSetup(dev=no);
%local autoPath;
%let autoPath = 'prodDir1' 'prodDir2' 'prodDir3';
%if %upcase(&dev.)
= YES %then
   %let autoPath = 'testDir1' 'testDir2' &autoPath.;

options sasautos=(&autoPath. sasautos) /* other autocall options */;
%mend progSetup;
```

There can, of course, be more elaborate directory structures. The underlying logic for the structure is the same, however: separate the production areas seen by the end users from the testing and development areas, and promote new or upgraded programs to production only after they have been thoroughly reviewed.

### 11.2 Validation

Any piece of code, macro or otherwise, must be validated before it can be moved to a production area. The resources devoted to the validation effort vary. Typically, the more complex or mission-critical the program, the more elaborate the validation process. When validating any macro, the validation checklist should do the following:

- confirm there is adequate header documentation
- verify macro output is limited to what was described in the header comment
- examine naming and default values of keyword parameters, ensuring they match what is described in the header comment
- thoroughly test parameter error-checking
- examine the program for appropriate exception / error handling
- confirm that the termination section removes temporary variables, datasets, etc. and sets any re-set options to their original state.

### Implementation: An Extended Example

Let's close with an example of a small, complete application that demonstrates many of the guidelines presented in this paper. %attrDiff was born out of need: datasets in a library had to have identical attributes for like-named variables. Patient identifier USUBJID, for example, should always be character, length 15, and have the label "Subject Identifier". The application reads the SAS dictionary table COLUMNS and compares attributes (type, length, label) of all variables in a library. Application output is a dataset containing the dataset name, variable name, and attributes of variables that are mismatched.
Before looking at specifics, it is worthwhile to note that a good portion of the program is taken up by comments and blank space. There are about 150 lines, and only about half are program statements. The rest of the space is used by header comments and a sprinkling of comments intended to help anyone trying to read the program. This narrative and blank space (blank line separators, indentation, alignment, etc.) is typical of a well-documented program, and makes it easy to read and understand. The source for %attrDiff, as well as programs demonstrating its use, is available at the author's web site: www.CodeCraftersInc.com

For each parameter, describe content, permissible values, whether the parameter is required or optional, and its default value. Likewise, describe the output dataset's observation content, sort order, and the conditions that prevent it from being created.

```sas
/* attrDiff
   Function: Identify conflicting attributes of like-named variables
             in a library

   Input:    Parameters (not case-sensitive)
             LIB      LIBNAME of library to examine.
                      REQUIRED. No default.
             COMPARE  Attributes to compare. Order does not matter.
                      Specify any or all of these variable attributes:
                         T - type   S - length   L - label
                      OPTIONAL. Default is tsl
             OUT      One or two-level name of output dataset. Cannot
                      be in the same library as specified by the LIB
                      parameter.
                      REQUIRED. No default.
             MSG      Write messages to Log? YES or NO
                      OPTIONAL. Default=YES

   Output:   Dataset specified by OUT parameter.
             Sort order is name, dataSet. Variables:
                dataSet     $ 32  Dataset name
                name        $ 32  Variable name (upper-cased)
                type        $ 32  Variable type
                typeFlag      8   Differing TYPE (0 or 1)
                length        8   Length (if COMPARE contained s)
                lengthFlag    8   Differing LENGTH (0 or 1)
                label       $255  Label (if COMPARE contained l)
                labelFlag     8   Differing LABEL (0 or 1)
             The dataset is created with 0 observations if there are
             no attribute conflicts. The dataset is NOT created if
             there are parameter errors. The OUT dataset cannot be
             located in the same library as LIB.
```
If we say the macro runs in open code, we should also test for it at the beginning of execution.

```sas
   Example:  %attrDiff(lib=clinical, compare=ts, out=probs)
             Compare type and length for data sets in library
             CLINICAL. Write output to dataset WORK.PROBS
```

Revision code notation is arbitrary. Just be consistent in referring to it throughout the program.

```sas
   History:  2007-10-08  JHA  Initial program
             2007-11-15  JHA  [U01] Add OUT parameter
*/
```

Use keyword parameters. Ensure default values match what was described in the header comment.

```sas
%macro attrDiff(lib=, compare=tsl, msg=yes, out=);   /* [U01] */
```

Since we said the macro runs in open code in the header comment, we need to test for that here, branching to the last statement rather than the termination section. Knowledge of automatic macro variables makes the open-code test straightforward.

```sas
%if &sysprocname. NE %then %do;
   %put attrDiff-> Must run in open code. Execution terminating.;
   %goto lastStmt;   /* <<<< <<< << < */
%end;
```

Begin the initialization section.

```sas
/* ---------- Housekeeping and initial messages ---------- */
```

Take explicit control of variable scope. You can place the %LOCAL statement near the statements creating the variables.

```sas
%local opts star;
```

Save initial option values before resetting them.

```sas
%let opts = %sysfunc(getoption(mprint)) %sysfunc(getoption(notes));
options nomprint nonotes;
%if &msg. = NO %then %let star = *;
```

Begin writing messages to the Log.

```sas
%&star.put;
%&star.put attrDiff-> Begin. Examine library [&lib.] compare [&compare.] create [&out.];
```

Standardization of values makes evaluation easier later on (code is less cluttered due to the lack of %upcase function references).

```sas
/* ---------- Upper case some parameters ---------- */
```

Create error flag OK. As we find problems, set OK to f and write a message. This lets us accumulate errors and report more than one problem at a time.

```sas
/* ---------- Check for parameter errors ---------- */
```

Take explicit control of variable scope. You can place the %LOCAL statement near the statements creating the variables.
Use %sysfunc as much as possible to reduce code volume.

```sas
%local ok outLib;

%if &lib. = %then %do;
   %let ok = f;
   %put attrDiff-> LIB cannot be null;
%end;
%else %if %sysfunc(libref(&lib.)) ^= 0 %then %do;
   %let ok = f;
   %put attrDiff-> Input LIBNAME [&lib.] not found.;
%end;

%if &out. = %then %do;   /* [U01] */
   %let ok = f;
   %put attrDiff-> OUT cannot be null;
%end;
%else %do;
   %if %index(&out., .) ^= 0
      %then %let outLib = %upcase(%scan(&out., 1, .));
      %else %let outLib = WORK;

   %if &outLib. = &lib. %then %do;
      %let ok = f;
      %put attrDiff-> OUT and LIB libraries cannot be identical;
   %end;
   %else %if %sysfunc(libref(&outLib.)) ^= 0 %then %do;
      %let ok = f;
      %put attrDiff-> Output LIBNAME [&outLib.] not found.;
   %end;
%end;

%if &compare. = %then %do;
   %let ok = f;
   %put attrDiff-> COMPARE cannot be null;
%end;
%else %if %sysfunc(verify(&compare., TSL)) > 0 %then %do;
   %let ok = f;
   %put attrDiff-> COMPARE can only contain T, S, or L;
%end;
```

Using INDEXW is a simple way to avoid bulky macro coding. The alternative would have been `%if &msg. ^= NO & &msg. ^= YES %then %do;`. The benefit of this technique grows as the number of comparisons increases.

```sas
%if %sysfunc(indexW(NO YES, &msg.)) = 0 %then %do;
   %let ok = f;
   %put attrDiff-> MSG can only contain YES or NO. Found [&msg.];
%end;
```

Branch to the termination section and print a message if we found any error conditions. Execution is forced to the termination section; this guarantees that any clean-up that is required will, in fact, get done. We do not use %return or %abort!

```sas
/* If anything was amiss, print a message and branch to bottom */
%if &ok. = f %then %do;
   %put attrDiff-> Execution terminating due to error(s) noted above;
   %put attrDiff-> Output dataset [&out.] will NOT be created;
```
```sas
   %goto bottom;   /* <<<< <<< << < */
%end;
```
```sas
%if %index(&compare., L) %then %do;
   %let lf = label, (count(distinct label) > 1 |
                     (count(distinct label) = 1 & sum(missing(label)) > 0)
                    ) as labelFlag;
   %let sumOps = &sumOps., labelFlag;
%end;
```

If this were a longer, more complicated program, we might have a %put statement saying *Step 1: read COLUMNS table, collect variable attributes*. Since the core processing is basically just a single step, this message is probably not necessary.

```sas
/* ---------- Build the dataset ---------- */
proc sql noprint;
   create table &out.   /* [U01] */
   as select &tf. &sf. &lf., upcase(name) as name, memname as dataSet
      from dictionary.columns
      where catt(libname, memType) = "&lib.DATA"
      group by name
      having sum(0 &sumOps.) > 0
      order by name, dataSet
      ;
   %&star.put attrDiff-> &SQLobs. variables with mismatches.;
quit;
```

The code processing section is complete. Execution drops into the termination section. The only clean-up required is setting some options to their original values; more complex macros might require deletion of temporary data sets, temporary global macro variables, etc.

```sas
%bottom:
%&star.put attrDiff-> Done.;
%&star.put;

/* ---------- Revert to original MPRINT and NOTES values ---------- */
options &opts.;

%lastStmt:
%mend attrDiff;
```

### Closing Comments

As stated in the introduction, this is a brief review of a large topic. You may have differing opinions about how to approach some of the items that were discussed. Indeed, you may have a list of items that you think should have been discussed but were omitted. For now, however, simply consider the reasons why the items were included, and how systems of programs would benefit from their use. And, of course, if you have questions or comments, contact the author: Frank@CodeCraftersInc.com

### References

There are many books, papers, and other resources that deal with the macro language. Here are some of the more useful web sites.
- **SAS Online Documentation** Find the macro documentation by following "Base SAS" → "SAS Macro Language: Reference". http://support.sas.com/onedoc/913/docMainpage.jsp
- **Conference Proceedings Archives** This site has thousands of conference papers. Use search terms such as "macro language" or "macro design". http://www.lexjansen.com
- **SAS Support** This site has many sample programs, links to publications, and the same "knowledge base" used by the SAS tech support staff. http://support.sas.com
- **SAS Community** A sort of virtual users group, containing blogs, forums, and downloads. http://www.sascommunity.org/wiki/Main_Page
- **SAS-L List Server** Questions, answers, and opinions dating back to the 1980's. A high-volume and high-quality group.
- **CodeCrafters, Inc.** The author's web site, containing SAS-related papers and many other professional and personal links.

### Acknowledgements

SAS and all other SAS Institute Inc. product or service names are registered trademarks or trademarks of SAS Institute Inc. in the USA and other countries. ® indicates USA registration. Other brand and product names are trademarks of their respective companies.

Figure 1: Programmatically-Generated Documentation
# RIFF: Reduced Instruction Footprint for Coverage-Guided Fuzzing

Mingzhe Wang, Jie Liang, Chijin Zhou, and Yu Jiang, Tsinghua University; Rui Wang, Capital Normal University; Chengnian Sun, University of Waterloo; Jiaguang Sun, Tsinghua University

https://www.usenix.org/conference/atc21/presentation/wang-mingzhe

## Abstract

Coverage-guided fuzzers use program coverage measurements to explore different program paths efficiently. The coverage pipeline consists of runtime collection and post-execution processing procedures. First, the target program executes instrumentation code to collect coverage information. Then the fuzzer performs an expensive analysis on the collected data, yet most program executions lead to no increases in coverage. Inefficient implementations of these steps significantly reduce the fuzzer's overall throughput.

In this paper, we propose RIFF, a highly efficient program coverage measurement mechanism to reduce fuzzing overhead. For the target program, RIFF moves computations originally done at runtime to instrumentation-time through static program analysis, thus reducing instrumentation code to a bare minimum. For the fuzzer, RIFF processes coverage with different levels of granularity and utilizes vector instructions to improve throughput.

We implement RIFF in state-of-the-art fuzzers such as AFL and MOpt and evaluate its performance on real-world programs in Google's FuzzBench and fuzzer-test-suite. The results show that RIFF improves coverage measurement efficiency of fuzzers by 23× and 6× during runtime collection and post-execution processing, respectively. As a result, the fuzzers complete 147% more executions, and use only 6.53 hours to reach the 24-hour coverage of baseline fuzzers on average.

## 1 Introduction

Fuzzing is an automated testing technique that attempts to detect bugs and vulnerabilities in programs [1, 3, 9, 13, 14, 24, 27, 31, 35, 36].
Coverage-guided fuzzing improves bug-detection ability of fuzzers by leveraging program coverage measurements to guide fuzzing towards exploring new program states [4, 8, 20, 29, 40]. These fuzzers perform the following steps: ① the fuzzer selects an input from the corpus and performs mutation operations to generate new inputs; ② the fuzzer executes the target program with mutated inputs and collects coverage statistics of these runs; ③ the fuzzer saves the input to the corpus if it can trigger bugs or find new program states. With proper coverage guidance, fuzzers can improve their efficiency by prioritizing mutation on interesting inputs in the corpus and discarding inputs that do not reach any new program states. Generally speaking, the coverage pipeline of fuzzers consists of two stages: runtime coverage information collection and post-execution processing: first, the target program is instrumented with coverage collection code, which updates an array of counters to record the runtime execution trace; after the completion of an execution, the fuzzer processes the values in the array to check whether each execution reaches any new program states. An instrumented program executes many more instructions compared to a non-instrumented binary. Since fuzzers continuously execute random inputs, a slight slow-down can significantly impact overall fuzzing performance. We analyze the source of overhead using many microarchitectural performance counters. For the target program, fuzzers insert instrumentation code for coverage collection at each basic block. The collection code saves the current register context, loads the base address for the counter region, computes the counter index, updates the corresponding counter value, restores the context, and transfers control back to the program logic (see Figure 2). The code is executed frequently, and can contain dozens of instructions encoded in around a hundred bytes. 
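The counter update buried inside that instrumentation is itself tiny. Following AFL's documented scheme (a compile-time random block identifier, XORed with a shifted copy of the previous block's identifier, indexing a 64 KB array of byte counters), it can be sketched in C; the names and the map size here are illustrative, not AFL's actual symbols:

```c
#include <stdint.h>

#define MAP_SIZE 65536  /* 2^16 eight-bit counters, as in AFL */

static uint8_t  coverage_map[MAP_SIZE];
static uint16_t prev_location;  /* shifted id of the previously executed block */

/* Called at the start of every basic block; cur_id is a compile-time
 * random identifier for that block. XORing the current and (shifted)
 * previous ids yields an index that approximates the edge between the
 * two blocks; the shift keeps the edges A->B and B->A distinct. */
void maybe_log(uint16_t cur_id) {
    coverage_map[cur_id ^ prev_location]++;
    prev_location = (uint16_t)(cur_id >> 1);
}
```

Everything else in the instrumentation stub (register saves, address loads, control transfer) is scaffolding around these two statements, which is precisely the overhead the next paragraphs quantify.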
Furthermore, modern processors use a multi-tier cache subsystem to reduce memory latency. Because the collection code updates the counter array, it adds many loads or stores to the instruction stream. These memory accesses stress the memory subsystem by competing with the program logic for instruction cache. The extra memory latency reduces the overall execution speed of programs. For the fuzzer itself, the instructions which process coverage do not uncover new states in most cases. While new program states are extremely rare, fuzzers need to perform the following operations: convert the raw coverage information into features, then check the database of known features, and update the database to add the newly discovered features [42]. This algorithm is implemented using memory write-, integer comparison- and conditional branching instructions. The complex nature of the code prevents the compiler from optimizing it. Consequently, the instructions emitted by the compiler cannot fully utilize instruction-level parallelism supported by the processor’s execution engine. In this paper, we propose RIFF to reduce instruction footprints of coverage pipelines and improve fuzzing throughput. RIFF utilizes compiler analyses and leverages low-level features directly exposed by the processor’s instruction set architecture. Specifically, ① RIFF reduces the amount of instructions executed for runtime coverage collection in the target program. First, RIFF removes edge index computations at runtime by pre-computing the edge transfers at instrumentation-time. Next, RIFF eliminates the instructions for loading the base of the counter region by assigning the region a link-time determined static address. Thus, RIFF can use only one instruction encoded in 6 bytes per instrumentation site. 
② RIFF removes unnecessary instructions when processing coverage in fuzzers by dividing processing granularity into three stages, where the first stage handles simple scenarios fast, while the last stage is suited for more sophisticated scenarios. For the most common case, RIFF scans the coverage region and skips zero-valued chunks using vector instructions, analyzing 16, 32, or 64 counters per iteration on modern processors.

To demonstrate the effectiveness of our approach, we implement RIFF by augmenting state-of-the-art fuzzers such as AFL [40] and MOpt [29] and evaluate its performance on real-world programs from Google's fuzzer-test-suite [21] and FuzzBench [30]. On the coverage collection side, RIFF reduces the average runtime overhead of instrumentation from 207% to 8%. On the post-execution processing side, RIFF reduces coverage processing time from 217 seconds to 42 seconds with AVX2 instructions [10] and 31 seconds with AVX512 instructions [34]. As a result, the enhanced fuzzers can complete 147% more executions during the 24-hour experiments, covering 13.13% more paths and 5.60% more edges. Alternatively, the improved fuzzers need only 6.53 hours to reach the 24-hour coverage of baseline fuzzers.

In summary, this paper makes the following contributions:

- We observe that the collection and processing of program coverage measurements significantly affect the speed of fuzzing. We break down the cost of instrumentation and analysis code.
- We eliminate much of the runtime cost by precomputing information statically, and we accelerate post-execution processing using vectorization.
- We adapt RIFF to popular fuzzers and achieve significant speedup on real-world programs. The coverage analysis algorithm of our work has been integrated into the production-level fuzzer AFL++ [23].

## 2 Background

### 2.1 Stages of a Coverage Pipeline

To guide fuzzing using coverage, fuzzers use a multi-stage pipeline.
Figure 1 takes AFL as an example to demonstrate how fuzzers handle coverage:

*Figure 1: The coverage pipeline of the standard fuzzing tool AFL. After collecting the coverage from the target program (arrows labeled "instrument" and "update"), the fuzzer determines whether the input triggers new program behavior ("classify" and "detect").*

① **Instrument.** At compile time, afl-clang allocates an array of 65,536 counters to store coverage as 8-bit counters. For each basic block of the target program, afl-as generates a random number ID as its identifier, then inserts a call to afl_maybe_log(ID) at the beginning. After instrumentation, the fuzzer generates random inputs and executes the program on each input. For each input, the fuzzer detects whether the input triggers new program states by using a database, as follows:

② **Update.** At run time, afl_maybe_log updates the coverage counters to collect edge coverage. The logging function hashes the identifiers of the previously executed and the current block, then uses the hash as an index into the counter array to increment the pointed-to counter by one.

③ **Classify.** After the target program completes execution, AFL reads the coverage counters to classify them into a bitmap of features. Each 8-bit counter with a nonzero value is mapped to one of 8 possible features, represented as a bitmap where each feature corresponds to one of the 8 bits inside the 8-bit counter. The classified result is written back to the coverage region.

④ **Detect.** With the edge transfer counts classified as a bitmap, AFL scans the database of unknown program states to detect new program behaviors: if a previously-unknown edge transfer is triggered, then the input will be labeled as "new coverage"; if a known edge transfer has different features, then it will be marked as a "new path"; otherwise, the current input is discarded. After the scan, AFL removes the newly discovered features by updating the database.
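The classify step above collapses each raw 8-bit hit count into one of eight coarse buckets, so that small fluctuations in execution counts do not register as new behavior. A sketch of that bucketing, using AFL's documented bucket boundaries (1, 2, 3, 4-7, 8-15, 16-31, 32-127, 128+), might look like:

```c
#include <stdint.h>

/* Map a raw execution count to a single-bit feature. A zero count
 * stays zero; every nonzero count sets exactly one of the 8 bits,
 * identifying which bucket the count falls into. */
uint8_t classify_count(uint8_t count) {
    if (count == 0)   return 0;
    if (count == 1)   return 1;    /* bucket: exactly 1   */
    if (count == 2)   return 2;    /* bucket: exactly 2   */
    if (count == 3)   return 4;    /* bucket: exactly 3   */
    if (count <= 7)   return 8;    /* bucket: 4..7        */
    if (count <= 15)  return 16;   /* bucket: 8..15       */
    if (count <= 31)  return 32;   /* bucket: 16..31      */
    if (count <= 127) return 64;   /* bucket: 32..127     */
    return 128;                    /* bucket: 128..255    */
}
```

In AFL this mapping is applied in batch with a precomputed lookup table rather than a chain of comparisons; the function above only makes the bucket boundaries explicit.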
### 2.2 Variants of Coverage Pipeline

While the implementation varies for different fuzzers, the design mostly follows the classic coverage pipeline first introduced by AFL. Table 1 presents the instrumentation mechanism for popular fuzzers. Despite different tool chains and compiler infrastructures, all the collection methods insert code or callbacks to collect coverage. For example, although SanitizerCoverage contains a set of instrumentation options and is implemented in both Clang [7] and GCC [6], it uses callbacks and array updates to report coverage. Note that FuzzBench implements its own instrumentation for AFL [15]; we list it only for completeness.

<table>
<thead>
<tr> <th>Method</th> <th>Target</th> <th>Infrastructure</th> </tr>
</thead>
<tbody>
<tr> <td>afl-{clang,gcc}</td> <td>Assembler</td> <td>N/A</td> </tr>
<tr> <td>afl-clang-fast</td> <td>Clang</td> <td>LLVM Pass</td> </tr>
<tr> <td>afl-fuzzbench</td> <td>Clang</td> <td>SanitizerCoverage</td> </tr>
<tr> <td>libFuzzer</td> <td>Clang</td> <td>SanitizerCoverage</td> </tr>
<tr> <td>honggfuzz</td> <td>Clang/GCC</td> <td>SanitizerCoverage</td> </tr>
<tr> <td>Angora</td> <td>Clang</td> <td>LLVM Pass</td> </tr>
</tbody>
</table>

Table 2 summarizes post-processing methods of coverage counters on the fuzzer's side. honggfuzz is a special case because it processes coverage in real time. Other fuzzers first classify the counter array to a bitmap of features, then scan the bitmap to detect the presence of new features.

<table>
<thead>
<tr> <th>Method</th> <th>Classify</th> <th>Scan</th> </tr>
</thead>
<tbody>
<tr> <td>AFL</td> <td>Batch</td> <td>Bit twiddling</td> </tr>
<tr> <td>libFuzzer</td> <td>Per Counter</td> <td>Statistics update</td> </tr>
<tr> <td>honggfuzz</td> <td>N/A</td> <td>N/A</td> </tr>
<tr> <td>Angora</td> <td>Distill</td> <td>Queued</td> </tr>
</tbody>
</table>

For example, AFL implements a two-pass design.
In the first pass, it performs bitmap conversion in batch; in the second pass, it applies bit-twiddling hacks for acceleration. libFuzzer employs a one-pass design: for each non-zero byte, libFuzzer converts it to a feature index, then updates the local and global statistics with complex operations such as binary search. Angora takes the queued approach: in the first pass, it distills a small collection of counter indices and feature masks out of the original array; in the second pass, it scans the collection to detect new coverage and pushes the modifications to the write-back queue; in the third pass, it locks the global database and applies the queued modifications. 3 Measuring Coverage Pipeline Overheads To measure the overhead of the coverage pipeline, we select the classic fuzzer AFL as an example: as the forerunner of coverage-guided fuzzing, most coverage-guided fuzzers partially or completely inherit its design. As the target program and workload, we use libxml2 from FuzzBench. 3.1 Cost of Instrumentation To evaluate the overhead of coverage collection, we select all instrumentation methods provided by AFL, which cover all compiler infrastructures listed in Table 1. For a fair comparison, we select afl-clang, afl-fuzzbench, and afl-clang-fast, because they share the same coverage update method and base compiler. We further decrease the optimization level of afl-fuzzbench to -O2 to match the other instrumentation methods. We collect performance metrics by running the target program under perf tools. To remove the one-time cost of program startup, we do a warm-up run of the program on 1 input, then execute 11 more inputs separately and compute the average per-execution cost. The Intel Intrinsics Guide [12] is used as the XML input. Table 3 lists the overhead of each collection method, normalizing each metric to the non-instrumented baseline program.
Looking at the “duration” column, we can see that instrumentation significantly slows down program execution. For example, by the time the fastest method, afl-clang-fast, finishes executing its first input, the non-instrumented program has already executed more than half of the second input. <table> <thead> <tr> <th>Method</th> <th>Duration</th> <th>Instructions</th> <th>L1-I</th> <th>L1-D</th> <th>µops</th> </tr> </thead> <tbody> <tr> <td>afl-clang</td> <td>3.50x</td> <td>4.26x</td> <td>102.36x</td> <td>5.16x</td> <td>4.72x</td> </tr> <tr> <td>afl-fuzzbench</td> <td>2.45x</td> <td>2.83x</td> <td>19.88x</td> <td>2.53x</td> <td>2.14x</td> </tr> <tr> <td>afl-clang-fast</td> <td>1.69x</td> <td>1.79x</td> <td>33.58x</td> <td>2.88x</td> <td>2.11x</td> </tr> </tbody> </table> The “instruction count” column explains the slowdown. Figure 2 lists the instructions inserted by afl-clang (the slowest method) and afl-clang-fast (the fastest method). Taking afl-clang as an example: for each basic block, it inserts 10 instructions encoded in 56 bytes. These instructions save the context, invoke __afl_maybe_log, and restore the context. Figure 2: Instructions inserted by afl-clang (22 instructions, 100 bytes) and afl-clang-fast (7 instructions, 39 bytes). Note that only the instruction marked in red updates the counter. The instrumentation code has a significant processor cost. First, it starves the processor’s front end, which translates instructions to micro-ops. For each basic block, afl-clang executes extra instructions totaling 100 bytes, i.e., 1/256 of the available L1 instruction cache. As a result, afl-clang experiences 101.36x more L1 instruction cache misses, and the CPU executes 3.72x more micro-ops for afl-clang-produced programs. ### 3.2 Unnecessary Instructions in Fuzzer Figure 3 presents the cost breakdown of afl-fuzz, obtained by sampling its CPU usage.
To reduce noise introduced by fuzzing, we sample the user-space CPU cycles for 5 seconds after afl-fuzz has discovered 2,000 paths of libxml2. From the figure we can see that afl-fuzz spends the majority of its time on the coverage pipeline: to detect new program states, AFL spends 84.48% of its valuable CPU time there. This overwhelmingly high share of CPU usage implies significant problems behind the overall system design, which inevitably lead to redundancy in the executed instructions. Table 4 shows that most executions do not improve coverage. We call a counter “useless” if its value is zero, since a zero-valued counter never maps to a feature. We call a program execution “useless” if its bitmap does not contain any new feature with respect to previous executions. After AFL terminated, we collected the coverage of the first 2,000 discovered paths and calculated the useless counters (see the first row). We also computed the useless executions during a 5-second time interval (see the second row). During that period, AFL executed 67,696 inputs, where each execution required processing at least 64 KiB of coverage. Although it processed over 4,231 MiB of coverage, it discovered only 2 new paths, and neither path covered new counters. Table 4: Number of Processed Counters and Executions <table> <thead> <tr> <th></th> <th>Total</th> <th>Useless</th> </tr> </thead> <tbody> <tr> <td>Counter</td> <td>65,536</td> <td>98.67%</td> </tr> <tr> <td>Execution</td> <td>67,696</td> <td>99.997%</td> </tr> </tbody> </table> As the first row in Table 4 shows, for the coverage of the first 2,000 discovered paths, 98.67% of the processed counters were zero. In other words, executing an input covers only 871.69 counters on average, yet the total number of counters allocated by AFL is 65,536. Angora’s instrumentation suffers even more, because it allocates 1 MiB of memory to store coverage.
The sparsity of the coverage array implies that quickly skipping zero counters during coverage analysis can be a major performance boost. As the second row in Table 4 shows, although 99.997% of the inputs did not trigger any new program behavior, AFL still performed many computations: the first pass converted the coverage to a bitmap, and the second pass re-read it to compare with the database of unknown program states. The same applies to libFuzzer, which maintains even more statistics, including the minimum input size, the trigger frequency of each feature, the list of the rarest features, and the list of unique features covered by the current input. The analysis requires complex computations involving table lookups, linear searches, and floating-point logarithms. Such analysis logic cannot be efficiently optimized by compilers: the high-level algorithm is riddled with side effects, control-flow transfers, and data dependencies. Due to this complexity, the compiler cannot perform important optimizations such as hardware-assisted loop vectorization; only shallow optimizations, such as loop unrolling, are performed. 4 Design of RIFF Figure 4 presents the overall design of RIFF. Similar to conventional coverage pipelines, it consists of compile-time instrumentation, runtime execution-trace collection, and coverage processing. At compile time, RIFF performs control-flow analysis and interprocedural analysis to pre-compute all possible control-flow edges; each edge is statically allocated a fixed counter index. This compile-time computation avoids address computation at runtime. Next, RIFF inserts code to log edge execution by incrementing the counter at the corresponding address. Finally, RIFF generates machine code with the help of the compiler’s backend, without requiring runtime context saving or restoring.
When the target program starts, RIFF’s runtime maps the coverage counters at the fixed address specified by the compiler. The simplified instrumentation and aggregated coverage layout reduce the overhead of coverage collection. We describe the optimized instrumentation in detail in Section 4.1. After the target program completes the execution of an input, the fuzzer enhanced with RIFF processes the coverage in three stages, using vectorization on the hot path. 4.1 Single-Instruction Instrumentation As shown in Figure 2, the instrumentation code that collects coverage is expensive. Not only does it update the counter for each basic block, but it also saves and restores registers around each counter update to preserve program logic. Moreover, the code loads the counter base address dynamically and computes the counter index by hashing the block index. RIFF reduces this code to a single instruction by performing much of the computation at compile time. Pre-Compute Counter Index AFL uses hashing-based control-flow edge coverage. While edge-level coverage can distinguish execution traces where block-based coverage cannot, maintaining the previous block’s identifier dynamically and computing hashes at runtime is expensive. Using the compiler infrastructure, RIFF performs the edge-coverage computation at compile time, and falls back to runtime computation only when static information is insufficient. Figure 5 illustrates the imprecision of block-level coverage. Figure (a) shows two control-flow graphs that have different edge counts but identical block counts. In theory, for a digraph with |V| vertices, there can be \(|V|(|V|-1)\) edges. Therefore, block count alone cannot determine the exact edge counts. However, in practice, the graph is very sparse, and in some cases the edge counts can be uniquely determined from the block counts. However, calculating edge counts this way requires an expensive computation to solve a system of equations.
As Figure (b) shows, there are three basic blocks (A, B, and C) and three edges (AB, BC, and AC). Suppose that the instrumentation scheme collects the counts for basic blocks A, B, and C as a, b, and c respectively. While the hit counts of edges AB and BC can both be directly represented by b (they are B’s only incoming and only outgoing edge, respectively), the hit count of edge AC must be computed (e.g., as a − b). Solving such a system of equations would significantly slow down the processing at the fuzzer’s side. RIFF instead leverages static analysis to allocate one counter for each edge, creating additional empty basic blocks when needed. As Algorithm 1 shows, if the hit count of an edge can be uniquely determined by its source or sink vertex, then that block’s count is used for the edge; otherwise, an empty block is allocated to represent the edge. Algorithm 1: Control-Flow Edge Instrumentation Data: A control-flow graph \( G = (V, E) \) Result: A new control-flow graph \( G' = (V', E') \) and a set of blocks to instrument \( T \subseteq V' \) 1. \( T \leftarrow \emptyset \) 2. \( V' \leftarrow V, E' \leftarrow E \) 3. for \((x, y) \in E\) do 4. if \( \delta^+(x) = 1 \) then 5. The source vertex has only one outgoing edge \((x, y)\), thus the hit count of \((x, y)\) equals that of \(x\); 6. \( T \leftarrow T \cup \{x\} \) 7. else if \( \delta^-(y) = 1 \) then 8. The sink vertex has only one incoming edge \((x, y)\), thus the hit count of \((x, y)\) equals that of \(y\); 9. \( T \leftarrow T \cup \{y\} \) 10. else 11. No direct representation is available; 12. Introduce a temporary vertex \( t_{(x,y)} \) to represent the hit count of \((x, y)\); 13. \( V' \leftarrow V' \cup \{t_{(x,y)}\} \) 14. \( E' \leftarrow (E' \setminus \{(x, y)\}) \cup \{(x, t_{(x,y)}), (t_{(x,y)}, y)\} \) 15. \( T \leftarrow T \cup \{t_{(x,y)}\} \) 16. end 17. end For function calls, RIFF uses the hit count of the caller block to represent the hit count of the edge between the caller block and the callee’s entry block, because the two counts are equal.
After collecting the blocks to instrument, RIFF assigns identifiers sequentially to each block and removes instrumentation sites whose hit counts can be represented by other counters. These identifiers are used as runtime indexes for the counters in the coverage array. Fix Counter Base AFL uses a block of shared memory for the counters. When the target program starts, the runtime library maps the shared memory into its address space and stores the base address in a global variable. While indirect addressing is flexible, computing the counter address dynamically for every basic block is inefficient. To remove the extra accesses to the counter base, the counter address must be a compile-time constant for each instrumentation site. Allocating counters at fixed addresses is done in two steps. At the beginning of each basic block, the instrumentation code should increment its associated counter. If the base address is fixed, and the index into the array is already allocated at compile time (see Section 4.1), then the address of the counter can also be computed at compile time. We can then directly increment the counter at that address, using, e.g., \texttt{incb $ADDR}. However, as Table 5 shows, directly encoding the target address inside the instruction requires a 7-byte instruction (scale-index-base). RIFF instead uses RIP-relative addressing, which requires only a 6-byte instruction. Moreover, the expensive register save/restore code is no longer needed. Table 5: Instruction Encoding of Addressing Modes <table> <thead> <tr> <th>Assembly</th> <th>Length</th> <th>Opcode</th> <th>ModRm</th> <th>SIB</th> <th>Disp</th> </tr> </thead> <tbody> <tr> <td>incb $ADDR</td> <td>7</td> <td>0xfe</td> <td>0x04</td> <td>0x25</td> <td></td> </tr> <tr> <td>incb $OFFSET(%rip)</td> <td>6</td> <td>0xfe</td> <td>0x05</td> <td></td> <td></td> </tr> </tbody> </table> Before the target program runs, the memory shared by the fuzzer must be correctly mapped into the address space of the target program.
To prevent the static and dynamic linkers from reusing the address for other symbols, RIFF fixes the binary’s image base at the address 0x800000 (8 MiB) and reserves the address range 0x400000 to 0x800000 for the coverage. Indirect Control Transfers While single-instruction instrumentation is efficient, it cannot be used for indirect control transfers. These occur in the following instances: GNU C extensions that allow taking the address of switch labels [18], \texttt{setjmp} and \texttt{longjmp} [26], function pointers, and unwinding on C++ exceptions. RIFF uses interprocedural control-flow analysis to discover such cases and falls back to dynamic computation. At the start of an edge representing an indirect control transfer (e.g., \texttt{setjmp}), RIFF stores the source block ID in thread-local storage before performing the transfer. At the target of an indirect control transfer (e.g., \texttt{longjmp}), RIFF loads the source block ID and computes the counter index by hashing. 4.2 Hot-Path Vectorized Analysis As Table 4 shows, among all coverage counters, only a small number are updated by the target program; among all executions, inputs that demonstrate new program behavior are extremely rare. This observation implies that many computations performed by the fuzzer do not produce useful results. If the redundant computation is removed, the simplified logic can be accelerated using SIMD instructions (Advanced Vector Extensions on x86-64 and NEON on ARMv8). Figure 6 demonstrates how this multi-stage processing design simplifies the logic. Stage 0 is the simplest: it just fetches 64-byte chunks and discards all-zero chunks. Stage 1 is invoked with the nonzero positions encoded as a bitmap in registers, which is directly compared with the database of unknown program states. The counters are discarded if no new features are discovered.
Only when it is determined that the current input triggers new program behaviors is the original analysis performed by AFL invoked, in Stage 2. While this stage requires complex computations, it is rarely invoked. \footnote{RIP is the instruction pointer.} Vectorized Scan Although coverage-guided fuzzing can discover a lot of code over a whole fuzzing session, the code covered by any single input is much smaller. Because most counters are not touched by the target program, their values stay zero after execution. Filtering out these zero counters early can skip the later processing stages, but the filtering operation itself requires extra computation. To filter the zero counters efficiently, we use instructions that scan counters in parallel. Modern processors have vector processing abilities. AVX512 is a typical single-instruction-multiple-data (SIMD) design proposed by Intel in 2013, and it is widely supported in modern server processors. Operating on 512-bit vectors, it can compare 64 lanes of 8-bit integers in parallel (vptestnmb). For example, on Skylake-X based processors, it completes such a scan in 4 clock cycles. By comparison, scalar processing requires 64 testb instructions with a latency of 1 cycle each. The vectorized comparison encodes the comparison results in a mask register. Each bit inside the mask register represents whether a lane inside the vector is zero. For example, if we treat a 64-byte data chunk as 8 lanes of u64, then the resulting mask register contains 8 bits. If the least significant bit (0x1) is set in the mask, then the first (#0) lane is zero. Similarly, if the most significant bit (0x80) is set, then the last (#7) lane is zero. Consequently, we can skip the following stages if all 8 bits are set (0xff), indicating that all the lanes are zero. Masked Compare If a chunk contains non-zero bytes, it may represent a new program behavior.
Therefore, the vectorized scan cannot discard the chunk and delegates the computation to the next stage, masked compare. In this stage, the coverage is classified, then compared with the database to detect new program behavior. However, even in a non-zero chunk, most of the lanes are likely zero because of the sparsity of coverage. To remove unnecessary computation, the mask obtained from the vectorized scan is used to sift out the nonzero lanes: only when the mask indicates that a lane is non-zero is the following classification applied; the zero lanes are discarded immediately. For each nonzero lane, the corresponding counters are read into a register. After classifying the raw counters into a bitmap using table lookups and bitwise operations, they are directly compared with the database. In most cases, the comparison finds no difference and the bitmap is discarded. We optimize for this scenario to avoid updates to both the bitmap and the database. Infrequent Update For inputs triggering interesting behavior, the processing of their coverage reaches Stage 2. This stage, which is seldom invoked, performs the original analysis of standard fuzzers: first, it classifies the original counters and writes the bitmap back to memory; second, it reads the bitmap, compares it with the database, and updates the database if needed. While scanning the bitmap, it checks for newly covered counters and declares the run to be “new coverage” if any are found; otherwise, the run has discovered a “new path”. 5 Implementation Because single-instruction instrumentation requires the precise counter address for each instrumentation point, instrumentation must be performed on the whole program, at link time. The compiler part of RIFF is implemented on LLVM. Specifically, when compiling source files, RIFF instructs the compiler to produce LLVM bitcode instead of object files.
These bitcode files are linked into whole-program bitcode for analysis. Next, RIFF instruments the whole program, leveraging LLVM’s DominatorTreeAnalysis and the BasicBlockUtils utilities, then generates machine code as a single object file. During code generation, LLVM prefers to generate 7-byte addb instructions over 6-byte incb instructions, because the default configuration of LLVM is tuned for old or mobile-first processors, where incb is slower than addb. To force instruction selection to generate incb instructions, we fine-tune the LLVM target attributes by disabling the slow-incdec target feature. As in conventional linking, the single object file is linked with the system libraries. After this step, the symbol denoting the start of the coverage counters is placed at a fixed address (SHN_ABS in st_shndx). As Listing 1 shows, the generated machine code requires only 6 bytes in most cases. Only on indirect transfers does RIFF fall back to runtime hashing. ``` # Single-instruction instrumentation incb $INDEX(%rip) # fe 05 ?? ?? ?? ?? # Rare case: indirect transfer (source) mov $PREV(%rip),%rcx # 48 8b 0d ?? ?? ?? ?? movl $BBID,%fs:(%rcx) # 64 c7 01 ?? ?? ?? ?? # Rare case: indirect transfer (destination) mov $PREV(%rip),%rcx # 48 8b 0d ?? ?? ?? ?? movslq %fs:(%rcx),%rax # 64 48 63 01 xor $BBID,%rax # 48 35 ?? ?? ?? ?? incb $BASE(%rax) # fe 80 ?? ?? ?? ?? ``` Listing 1: Assembly and machine code generated by RIFF. Because vectorized coverage processing relies on hardware support for SIMD instructions, we currently implement two variants on x86-64. If AVX512 Doubleword and Quadword Instructions (AVX512DQ) are supported, then 8 lanes of 64-bit integers are processed as a chunk. If AVX2 is supported, then 4 lanes of 64-bit integers are processed as a chunk. Otherwise, Stage 0 is skipped entirely, and Stage 1 is executed directly.
We implement the algorithms via intrinsic functions to take advantage of compiler-based register allocation. 6 Evaluation To demonstrate how the reduced instruction footprint accelerates fuzzing, we evaluate the performance of RIFF in real-world settings. As target programs, we select every program included in both Google fuzzer-test-suite and FuzzBench. Carefully picked by Google, they encompass a comprehensive set of widely used real-world programs. As fuzzers, we select the classic industrial fuzzer AFL and the recently published MOpt. We compile the programs with afl-clang using the default settings and compile RIFF’s version with our instrumentation pipeline. In both cases we use Clang 11.0 with the same configuration (e.g., optimization level). The baseline versions of the fuzzers are built from their git repositories without modification; we further apply RIFF’s hot-path acceleration patch to obtain the RIFF-based versions. Because RIFF and AFL use different instrumentation, we calibrate the raw metrics with fuzzer-test-suite’s coverage binary for fairness. All the coverage used in the following analysis is based on the calibrated data. We perform the experiments on Linux 5.8.10 with 128 GiB of RAM. The processor is an Intel Xeon Gold 6148; its Skylake-Server microarchitecture allows acceleration with AVX2 and AVX512. 6.1 Overall Results Figure 7 compares the time required by RIFF to reach the same coverage as AFL and MOpt running for 3h, 6h, 12h, and 24h. A bar below the red line indicates a speed-up for RIFF. The purple bars show the speedup for the long experiments run for 24 hours, where fuzzing tends to saturate (discovering few new paths). On average, to reach the final coverage of AFL and MOpt running for 24 hours, RIFF’s improved versions require only 6.23 and 6.82 hours respectively.
For individual programs, the improvements are consistent: even for the worst programs (freetype2 for AFL and libjpeg for MOpt), RIFF still reached the final coverage 2.1 and 0.8 hours before the baseline versions. On average, RIFF accelerates the 24-hour fuzzing by 268.74%. ![Figure 7](image-url) The bars for 3h, 6h, and 12h show the speedup for shorter experiments. In such scenarios, saturation is less likely, and randomness can lead to slowdowns (causing a different set of inputs to be explored). Even so, RIFF still frequently performs best. For example, when fuzzing freetype2 with AFL, the RIFF-based version initially requires 1.22 more hours to catch up with the baseline, but its performance gradually improves as the experiment is extended: it leads by 0.85, 1.56, and 2.10 hours at 6, 12, and 24 hours respectively. Figure 8 presents the overall results after the 24-hour experiments. In the figure, the baseline metrics from AFL and MOpt are normalized to the red horizontal line at 1.0, while the corresponding metrics from RIFF’s optimizations are drawn as bars. Higher bars indicate better performance. The “covered edges” graph in Figure 8 demonstrates the overall improvement brought by RIFF. On average, RIFF improves the coverage of AFL and MOpt by 4.96% and 6.25% respectively. The improvement is consistent across individual programs: among all 28 experiments, RIFF performs best in 27. Because RIFF accelerates both the fuzzer and the target program, more executions can be completed in less time. Despite the trend toward saturation in the long 24-hour trials, RIFF still managed to cover rare edges that require a large number of executions. The “total paths” graph in Figure 8 demonstrates that RIFF’s feedback signal is as good as the baseline versions’.
For most programs, RIFF improves the total number of discovered paths since it performs more executions: on average, RIFF improves the number of discovered paths by 10.79% and 15.48% over AFL and MOpt respectively. Although RIFF simplifies the computation of edge coverage, its ability to provide fuzzing signals is not reduced, thanks to the compile-time analysis. Take re2 for example: both baseline versions seem to discover more paths; however, paths only provide fuzzing signals, so more paths do not necessarily lead to more coverage. When the fuzzing-oriented coverage is calibrated to fuzzer-test-suite’s canonical coverage, the RIFF-based fuzzers discover more edges. The advantage of RIFF can be seen in the “total executions” graph. RIFF increases the number of fuzzing executions completed in the same amount of time by between 1.03% and 541.38%. While the randomness introduced by fuzzing algorithms can occasionally diminish coverage, the overall result confirms that RIFF improves execution in general. The vastly increased number of executions can be attributed to the reduced overhead on both the target program’s and the fuzzer’s side. ### 6.2 Simplified Coverage Collection Single-instruction instrumentation reduces the overhead of coverage collection. To evaluate it fairly, we first fix a set of inputs and reuse the same inputs for all measurements and all fuzzers. For each program, we mix 1,000 inputs discovered by all fuzzers; while executing the programs, we measure the time and normalize it against the non-instrumented version. Figure 10 shows the instrumentation overhead for both afl-clang and RIFF. The figure demonstrates that the widely used instrumentation scheme afl-clang imposes heavy overhead on all the programs: compared to the non-instrumented programs, the average execution time of programs instrumented by afl-clang increases by 206.83%.
The reasons are explained by Figure 9: afl-clang executes 340.63% more instructions, which translate into 338.47% more µops and require 242.97% more L1 instruction cache refills. RIFF reduces the instruction footprint down to one instruction per site. On average, the coverage collection of RIFF requires only 8.40% more execution time, while afl-clang requires 206.83% more. In other words, RIFF reduces the overhead by roughly 23 times. The improvement is explained by the reduced instruction footprint: RIFF eliminates loads of the counter base, shifts the computation of counter indexes to compile time, and removes the context save/restore code. ![Figure 8: Normalized performance metrics for RIFF-based fuzzers after 24 hours of fuzzing. The X axis is programs, the Y axis is the normalized performance metric (ratio between RIFF and the standard fuzzer). Bars higher than 1 (red line) indicate better performance.](image-url) 6.3 Accelerated Coverage Processing Hot-path vectorization accelerates the coverage processing at the fuzzer’s side. To cancel out randomness from the fuzzing loops and irrelevant speedups in the target programs, we extract the coverage-processing routine as a library and evaluate it in isolation. As in Section 6.2, we fix a set of inputs, then run experiments with these inputs to collect the raw coverage counter arrays. However, because all the saved inputs are rare cases that led to new coverage, running the coverage-processing routine on the saved inputs one by one would exaggerate the rate of discovery. Instead, we calculate the average number of executions needed to discover a new input during the whole fuzzing session, and run the coverage-processing routine repeatedly this many times on the raw coverage of each of the first 50 inputs. We then calculate the total processing time required to discover the first 50 inputs; we present the normalized values in Figure 11. Figure 11 shows the benefits of hot-path vectorization.
The processing time of AFL and MOpt is normalized to 1.0, shown as the red horizontal line; the bars show the processing time of RIFF. The AVX2 and AVX512 bars of Figure 11 demonstrate RIFF’s improved efficiency in coverage processing. Leveraging AVX2, RIFF uses one instruction to compare 32 coverage counters in parallel; AVX512 further extends the parallelism to 64 counters per comparison. With hardware-assisted processing, the vectorized versions improve the efficiency of the original scalar-based pipeline by 4.64x and 6.01x respectively. 7 Discussion Currently, we have only evaluated our work on x86-64, due to insufficient fuzzer support on other platforms. For example, AFL only provides experimental ARM support via QEMU. While the implementation is target-dependent, the general idea applies to all platforms: the minimal instrumentation logic can be implemented with just 4 instructions on ARMv8 or RISC-V systems, and the vectorized coverage processing can use the ARMv8 NEON ISA instead of AVX2 or AVX512. As for the applicability of our improvement, we only applied our work to the industrial fuzzer AFL and the academic work MOpt, due to limited resources. While they use different fuzzing algorithms, the improvements brought by RIFF are similar (see Figures 7 and 8). Our work can be easily adapted to more fuzzers. For example, developers of AFL++ [17] have adapted our work to their code base and conducted independent third-party evaluations with Google FuzzBench [30]. According to the results [5], our modification (labeled as “skim”) was the best-performing one among all 10 variants. 8 Related Work 8.1 Vectorized Emulation Snapshot fuzzing [2, 16] tests the target program from partially executed system states. The program is broken into small pieces of code, and the execution of the code is emulated by the hypervisor. Because the emulation simplifies the logic to execute, multiple system states can be emulated simultaneously with vectorization.
Rather than accelerating emulation, RIFF focuses on the coverage pipeline: its single-instruction instrumentation, which can be combined with vectorization-based emulation and checkpointing, accelerates the execution of target programs, and its hot-path optimization accelerates the fuzzer’s coverage analysis. 8.2 Enriching Semantics of Coverage Since coverage quality is crucial for input prioritization, numerous approaches have been proposed in academia that bring more semantics to coverage. For example, VUzzer [33] adds call-stack information to coverage, and Angora enhances coverage with search targets [11]. Sometimes, researchers introduce data-flow features into conventional control-flow-oriented coverage. For example, Steelix [28] stores branch comparison operators, Dowser [22] records branch constraints, GreyOne [19] imports constraint conformance to tune the evolution direction of fuzzing, and [37] traces memory accesses. While these techniques can help a fuzzer choose better inputs, the added complexity introduces heavy overhead and severely limits execution speed. 8.3 Reducing Overhead of Coverage Not instrumenting the program eliminates its overhead altogether. Researchers utilize debugger breakpoints, with hardware support, to detect the first time a block is covered [32, 43]; in this scheme, only the first occurrence of a block has extra cost. However, without any instrumentation, the number of times a block has been covered is lost; in contrast, RIFF does not reduce the quality of feedback. Another idea is to reduce the number of instrumentation points [25]. However, the cost of each instrumentation point remains high, because it still needs to maintain the edge information by hashing. RIFF simplifies instrumentation points to single instructions; it is not focused on reducing the number of instrumentation points.
8.4 Reducing Overhead of Operating System Traditionally, fuzzing targets utility programs where each execution requires `fork`ing a new process and then calling `execve` on the new binary. To remove the costly `execve`, AFL implements fork server mode [39]. To reduce the cost of `fork`, Xu et al. [38] design a new system call, `snapshot`, to restore the execution state in place. To further reduce the number of invocations of `fork`, AFL implements persistent mode [41], where a program runs continuously without restart. libFuzzer further eliminates other expensive system calls with in-process fuzzing: if the fuzz target is a library, then fuzzing is performed in memory. With these operating-system-level improvements, the major overhead introduced by the context switches of system calls has been greatly reduced. Consequently, the cost of execution itself has become another prominent problem. RIFF reduces this cost by reducing the instruction footprint of the coverage pipeline. 9 Conclusion In this paper, we present RIFF to reduce the instruction footprint for fuzzing. We first observe that the coverage pipeline in fuzzing slows down the overall execution speed. We find that the heavy instruction footprint is the root cause: for target programs, the expensive instructions collect coverage inefficiently; for fuzzers, the unnecessary instructions cannot fully exploit the processor's ability. We implement RIFF to reduce the instruction footprint and achieve a 268.74% speedup in the 24-hour experiments. RIFF is being integrated into popular fuzzers such as AFL and AFL++ for use in industry and has shown significant improvements over the state of the art. Acknowledgments We sincerely appreciate the shepherding from Mihai Budiu and Eric Schkufza. We would also like to thank the anonymous reviewers for their valuable comments and input to improve our paper. This research is sponsored in part by the NSFC Program (No. 62022046, U1911401, 61802223), National Key Research and Development Project (Grant No.
2019YFB1706200) and Ali-Tsinghua Database Testing Research Project (No. 20212000070).
Integrating Algorithmic Parameters into Benchmarking and Design Space Exploration in 3D Scene Understanding Published in: Parallel Architecture and Compilation Techniques (PACT), 2016 International Conference on. DOI: 10.1145/2967938.2967963. Document version: peer-reviewed author version (Edinburgh Research Explorer). Bruno Bodin bbodin@inf.ed.ac.uk Luigi Nardi l.nardi@imperial.ac.uk M. Zeeshan Zia zeeshan.zia@imperial.ac.uk Harry Wagstaff h.wagstaff@inf.ed.ac.uk Govind Sreekar Shenoy gsreekar@inf.ed.ac.uk Murali Emani emani1@llnl.gov John Mawer john.mawer@manchester.ac.uk Christos Kotselidis christos.kotselidis@manchester.ac.uk Andy Nisbet andy.nisbet@manchester.ac.uk Mikel Lujan mikel.lujan@manchester.ac.uk Björn Franke bfranke@inf.ed.ac.uk Paul H. J. Kelly p.kelly@imperial.ac.uk Michael O’Boyle mob@inf.ed.ac.uk ABSTRACT System designers typically use well-studied benchmarks to evaluate and improve new architectures and compilers. We design tomorrow’s systems based on yesterday’s applications.
In this paper we investigate an emerging application, 3D scene understanding, likely to be significant in the mobile space in the near future. Until now, this application could only run in real-time on desktop GPUs. In this work, we examine how it can be mapped to power constrained embedded systems. Key to our approach is the idea of incremental co-design exploration, where optimization choices that concern the domain layer are incrementally explored together with low-level compiler and architecture choices. The goal of this exploration is to reduce execution time while minimizing power and meeting our quality of result objective. As the design space is too large to exhaustively evaluate, we use active learning based on a random forest predictor to find good designs. We show that our approach can, for the first time, achieve dense 3D mapping and tracking in the real-time range within a 1W power budget on a popular embedded device. This is a 4.8x execution time improvement and a 2.8x power reduction compared to the state-of-the-art. Keywords design space exploration; DSE; computer vision; SLAM; embedded systems 1. INTRODUCTION The computing landscape has changed dramatically over the last decade. We have witnessed the decline of desktops and the rise of mobile devices as computing platforms. At the system level, power constraints have caused a fundamental shift to parallel heterogeneous platforms which is particularly important in thermally limited embedded mobile devices. More recently, the well-known dark silicon challenge suggests that we will not be able to simultaneously power on all the cores (or transistors) on a device [17]. Heterogeneous multi-core systems have emerged as a promising solution to this problem. For example, the ARM big.LITTLE technology [9] puts both a high power, high performance core cluster, and a more efficient but less computationally powerful cluster, on the same die. 
Software can then partition work between these cores, or switch off cores entirely, depending on requirements. By mapping different parts of an application onto the appropriate specialized hardware resource, we can use the available power budget in an optimal manner. For that reason, heterogeneous multi-processor system-on-chips (MPSoCs) have been widely adopted in mobile embedded systems. In order to design and program heterogeneous MPSoCs, a vertical approach is necessary. Deep knowledge of all levels of the stack, from compilers to the micro-architecture, is needed in order to optimally map the executed code onto such diverse hardware resources. Additionally, deep domain knowledge may be required to tune software parameters to meet multiple conflicting design goals. This paper shows how we can go beyond conventional benchmarking in computer systems research by exposing the algorithmic-level design space. Traditionally, system designers have evaluated new architecture and compiler features using well-studied and broadly accepted benchmarks such as SPEC2006 [22]. However, since such benchmarks represent a historical snapshot of applications, they are not representative of modern requirements. In contrast, to design tomorrow’s systems, we need to consider new emerging applications from diverse domains. In this paper we focus on one set of emerging applications that is becoming significant in the mobile space: real-time 3D scene understanding in computer vision. In particular, we investigate dense simultaneous localization and mapping (dense SLAM) algorithms, which are extremely computationally demanding. One such dense SLAM algorithm is KinectFusion (KFusion), which estimates the pose of a depth camera whilst constructing a highly detailed 3D model of the environment.
Since such applications are typically tuned for high-end desktops with high power budget, executing them on power-constrained embedded devices is very challenging and, therefore, represents a realistic future application use case. We use the SLAMBench benchmarking framework, which contains a KFusion implementation, as it allows us to capture the performance metrics used to drive our design space exploration. We explore the mapping of SLAM applications to power constrained heterogeneous platforms. The key element of our approach is the exploration of the mapping problem at multiple levels, vertically integrating the algorithmic domain and the implementation layers. Instead of ignoring levels of the computing stack, we perform co-design space exploration. In other words, we examine how algorithmic, compiler, and architecture configuration choices affect the performance of the underlying system. The rationale behind including the algorithmic parameters in the co-design space exploration is that although these algorithms are tuned for desktop systems, it is unlikely that the same configurations will be optimal in a mobile MPSoC setting. We define the performance in terms of power consumption (measured in Watts, lower is better), accuracy of the computation (measured in centimeters, lower is better), and runtime (measured as wall clock time per frame in seconds, lower is better). The runtime is sometimes also quantified by the number of frames processed in one second, i.e., frames per second (FPS), higher is better; the current Microsoft Kinect (or equivalent ASUS Xtion Pro) RGB-D sensor runs at 30 FPS, so 30 FPS is needed for real-time processing. These three metrics interact and are considered simultaneously for a holistic evaluation of the system. Since the co-design space can be extremely large, it is not feasible to try all possible configurations. 
Instead, we sample the domain space and automatically build a model that predicts the three performance metrics for a given configuration. Using this model, and a methodology from machine learning known as active learning, we predict a three dimensional performance Pareto curve that is then used to feed the lower level layers, driving the compiler and architecture parameter choices. By exploring the resulting Pareto curve we obtain a mapping to an embedded platform that results in a 6.6-fold speedup over the original mobile implementation. More precisely, this new configuration runs at nearly 40 FPS while maintaining an acceptable accuracy (under 5 cm localization error) and keeping power consumption under 2 Watts. The Pareto front contains many more configurations, allowing us to trade between runtime, power consumption, and accuracy, depending on our desired goals. For example, we can also find points which minimize power consumption (e.g., a configuration providing 11.92 FPS at 0.65W) or which optimize for execution time without exceeding a given power budget (29.09 FPS at less than 1W). This paper demonstrates that our co-design space exploration tailors future applications to future power-constrained systems. The contributions of this paper are as follows: - We perform a vertical co-design space exploration considering algorithmic, compiler, and hardware layers. - We show that domain-specific knowledge can be used to trade off multiple optimization goals at an algorithmic level, before considering low-level implementation choices. - We introduce an effective method to guide the optimization using multi-objective performance prediction based on random forest and active learning. - In order to explore the potential for this approach we evaluate our methodology on an emerging SLAM benchmarking framework which supports quantitative evaluation of solution accuracy, execution time and power consumption. 
We obtain a 6.6x best improvement in execution time or a 4.3x best reduction in power dissipation over a hand-tuned implementation by a SLAM domain expert. 2. BACKGROUND Simultaneous localization and mapping (SLAM) systems aim to perform real-time localization and mapping “simultaneously” from a sensor moving through an unknown environment. Localization typically estimates the location and pose of the sensor with respect to a map, which is extended as the sensor explores the environment. Dense SLAM systems in particular map entire 3D surfaces, as opposed to non-dense (feature-based) systems where maps are represented at the level of sparse point landmarks. Dense SLAM systems enable a mobile robot to perform path planning and collision avoidance, or an augmented reality (AR) system to render physically plausible animations at appropriate locations in the scene. Recent advances in computer vision have led to the development of real-time algorithms for dense SLAM such as KFusion. Such algorithms estimate the pose of a depth camera while building a highly detailed 3D model of the environment (see Figure 1). In this work we use the SLAMBench benchmarking framework [31], which enables evaluation of runtime, power consumption, and accuracy for KFusion. Figure 3 depicts the KFusion performance metric measurements for two different platforms, namely the NVIDIA Jetson TK1 featuring the Tegra K1 SoC and the ODROID-XU3 equipped with a Samsung Exynos 5422 SoC. For a mobile SLAM system to be usable, an implementation needs to provide real-time processing, i.e. a frame rate of 30 FPS for common cameras, to consume less than 3W of power, which enables fan-less cooling, and to provide an absolute trajectory error (ATE) of at most 5 cm. The ATE is calculated as the mean difference between the real trajectory and the estimated trajectory of a camera produced by a SLAM implementation. Thus, a smaller ATE implies less deviation from the real trajectory.
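Taking the mean-difference definition above at face value, the ATE computation can be sketched as follows. Note this is an illustrative reading: SLAMBench's actual ATE computation may differ in detail (for example, it may first align the two trajectories), and the function name is ours.

```python
import math

def absolute_trajectory_error(real, estimated):
    """Mean Euclidean distance between corresponding 3D trajectory points.

    `real` and `estimated` are equal-length sequences of (x, y, z)
    positions; the result is in the same unit as the coordinates.
    """
    assert len(real) == len(estimated), "trajectories must be aligned"
    dists = [math.dist(p, q) for p, q in zip(real, estimated)]
    return sum(dists) / len(dists)
```

Under this reading, the 5 cm usability threshold is simply a bound on the returned mean.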
We observe (Figure 3) that neither the NVIDIA Jetson TK1 nor the ODROID meets these requirements with the default configuration. While the speed of the TK1 implementation is close to 30 FPS, it consumes significantly more power. The ODROID implementation meets the power constraint but its frame rate is too low. Note that both platforms meet the accuracy constraint. The results on the ODROID platform after design space exploration (DSE) are also shown in Figure 3. The improved KFusion application now delivers FPS > 30 on the ODROID platform and, at the same time, consumes less power, thus meeting both the performance and power constraints. Although the ATE has increased slightly, the design constraint is still satisfied. This example demonstrates that for complex applications there is a trade-off between different performance metrics that can be exploited through an intelligent design space exploration. 3. METHODOLOGY In this section we describe our approach, including a detailed explanation of the design space parameters, the objectives which we are targeting, and our incremental approach to exploring the design space. In Section 4 we go on to describe the search techniques we use to guide our exploration through the design space. ### 3.1 Experimental Setting In order to evaluate our design space exploration (DSE) we use the SLAMBench framework with the ICL-NUIM [21, 20] dataset, specifically the first 400 frames of living room trajectory 2. We halved the original sequence in order to reduce the overall execution time of the benchmark; this was done after careful consideration that the accuracy metric is still representative of the whole sequence. Usual approaches in performance optimization consider benchmark suites that are, in general, sets of small kernels extracted from real applications. A criticism of what can be learnt from a benchmark suite is that it may not represent and capture well the complex interaction of kernels in a real-world application.
Our application is composed of more than 10 GPU-accelerated kernels. It presents the opportunity to explore parameters at the algorithmic level, which is not possible with conventional benchmark suites. During execution the following three performance metrics are collected: 1) computation time, 2) absolute trajectory error (ATE) of the frame sequence, and 3) power consumption. ### 3.2 Co-Design Space The possible values of the parameters taken into consideration for the co-design space exploration are summarized in Table 1. Here we look at three different spaces: algorithmic, compilation, and architecture. #### Algorithmic Space. In this paragraph we summarize the algorithmic parameters that most affect our performance metrics. In the case of the SLAMBench implementation of the KFusion algorithm, we have access to the listed parameters. An extensive explanation of these can be found in [33] [31]. - **Volume resolution**: The resolution of the scene being reconstructed. As an example, a 64x64x64 voxel grid captures less detail than a 256x256x256 voxel grid. - **µ distance**: The output volume of KFusion is defined as a truncated signed distance function (TSDF) [33]. Every volume element (voxel) of the volume contains the best likelihood distance to the nearest visible surface, up to a truncation distance denoted by the parameter µ, also referred to as mu in the text. - **Pyramid level iterations**: The number of block averaging iterations to perform while building each level of the image pyramid. - **Compute size ratio**: The fractional depth image resolution used as input. As an example, a value of 8 means that the raw frame is resized to one-eighth resolution. - **Tracking rate**: The rate at which the KFusion algorithm attempts to perform localisation. A new localisation is performed after every tracking-rate number of frames. - **ICP threshold**: The threshold for the iterative closest point (ICP) algorithm [11] used during the tracking phase.
- **Integration rate**: As the output of KFusion is a volumetric representation of the recorded scene, it needs to be repeatedly expanded using new frames. A new frame is integrated after every integration-rate number of frames. We observe that the algorithmic design space consists of roughly 1,800,000 points. Furthermore, the exploration of algorithmic parameters involves trade-offs between accuracy, runtime, and power consumption. #### Compiler Space. In order to explore this space, we first compile each SLAMBench OpenCL kernel to LLVM IR using the clang compiler, before performing the selected LLVM optimization passes listed below. We then use Axtor [30] to produce OpenCL code from the processed LLVM IR. The optimized kernels are then used in SLAMBench instead of the original ones. A large number of compilation parameters exist; we selected those listed in Table 1, which are detailed below. - **OpenCL Flags**: We have explored eight standard flags that enable or disable some OpenCL compiler optimizations. For completeness we list here the set of OpenCL flags used, see [6] for explanation: cl-single-precision-constant, cl-denorms-are-zero, cl-opt-disable, cl-mad-enable, cl-no-signed-zeros, cl-finite-math-only, cl-unsafe-math-optimizations, and cl-fast-relaxed-math.
<table>
<thead>
<tr>
<th>Algorithmic parameters</th>
<th>Values</th>
</tr>
</thead>
<tbody>
<tr>
<td>Volume resolution</td>
<td>64x64x64, 128x128x128, 256x256x256, 512x512x512</td>
</tr>
<tr>
<td>µ distance</td>
<td>0.025, 0.075, 0.1, 0.2</td>
</tr>
<tr>
<td>Pyramid level iterations (level 1)</td>
<td>3, 5, 7, 9, 11</td>
</tr>
<tr>
<td>Pyramid level iterations (level 2)</td>
<td>3, 5, 7, 9, 11</td>
</tr>
<tr>
<td>Pyramid level iterations (level 3)</td>
<td>3, 5, 7, 9, 11</td>
</tr>
<tr>
<td>Compute size ratio</td>
<td>1, 2, 4, 8</td>
</tr>
<tr>
<td>Tracking rate</td>
<td>1, 3, 5, 7, 9</td>
</tr>
<tr>
<td>ICP threshold</td>
<td>0, 10^-4, 10^-5, 10^-6, 1</td>
</tr>
<tr>
<td>Integration rate</td>
<td>1, 5, 10, 20, 30</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Compiler parameters</th>
<th>Values</th>
</tr>
</thead>
<tbody>
<tr>
<td>OpenCL flags</td>
<td>cl-mad-enable, cl-fast-relaxed-math, cl-single-precision-constant, ...</td>
</tr>
<tr>
<td>LLVM flags</td>
<td>...</td>
</tr>
<tr>
<td>Local work group size</td>
<td>1, 2, 4, 8, 16, 32</td>
</tr>
<tr>
<td>Vectorization factor</td>
<td>1, 2, 4, 8, 16, 32</td>
</tr>
<tr>
<td>Vectorization direction</td>
<td>x, y</td>
</tr>
<tr>
<td>Thread coarsening factor</td>
<td>...</td>
</tr>
<tr>
<td>Thread coarsening stride</td>
<td>...</td>
</tr>
<tr>
<td>Thread coarsening dimension</td>
<td>x, y</td>
</tr>
</tbody>
</table>
<table>
<thead>
<tr>
<th>Architecture parameters</th>
<th>Values</th>
</tr>
</thead>
<tbody>
<tr>
<td>GPU processor frequency (MHz)</td>
<td>177, 266, 350, 420, 480, 543, 600, DVFS</td>
</tr>
<tr>
<td>Number of active big cores</td>
<td>1, 2, 3, 4</td>
</tr>
<tr>
<td>Number of active little cores</td>
<td>1, 2, 3, 4</td>
</tr>
</tbody>
</table>
Table 1: The three co-design exploration spaces and the parameters used. • **Vectorization**: Loop vectorization with various vector widths and directions on kernels that allow this optimization.
• **Coarsening degree**: Thread coarsening is an advanced compiler optimization [29] which merges multiple parallel threads together, reducing the total number of threads instantiated. The factor parameter specifies how many threads are merged. The stride parameter affects the threads’ mapping distribution, enabling coalesced access patterns. The dimension parameter specifies the dimension affected by the merge. The compiler parameters affect both the power and performance metrics. In addition, accuracy may be affected by some OpenCL flags which cause relaxed maths to be used, for example cl-fast-relaxed-math. **Architecture Space.** The architectural parameters exposed by each platform differ quite significantly. We considered two platforms: the ASUS T200TA and the ODROID-XU3. In the case of the ASUS T200TA we are currently only able to select the CPU frequency governor, which in turn scales the CPU frequency and voltage. In this case we have access to only two governors: ‘powersave’ and ‘performance’, which set the CPU frequency to the lowest and highest available settings, respectively. In the case of the ODROID-XU3 platform, we have access to the parameters listed in Table 1: - GPU processor frequency: By default the GPU dynamic voltage and frequency scaling (DVFS) is active and the GPU dynamically adjusts to a particular frequency depending on the performance/power profile of the application. We disable the DVFS and set the GPU frequency to a specific value. - Number of active cores: The number of CPU cores that are active and running; these comprise four “big” (Cortex-A15) and four “LITTLE” (Cortex-A7) cores (Section 5.1). By default all 8 cores are active, and we selectively switch off a number of cores. CPU DVFS is not available on this platform and, therefore, this dimension of the architecture space cannot be explored. The architectural parameters affect both the performance and the power metrics, but not the accuracy.
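The factor/stride interplay of thread coarsening can be illustrated with one common index mapping from the coarsening literature. The function name and exact mapping below are illustrative, not the compiler's actual scheme: a coarsened thread with id `cid` takes over the work of `factor` original threads, interleaved at distance `stride`.

```python
def coarsened_ids(cid, factor, stride):
    """Original thread ids handled by coarsened thread `cid`.

    Stride 1 gives each coarse thread a contiguous block of original
    ids; larger strides interleave the ids, which can preserve the
    coalesced memory-access pattern of the uncoarsened kernel.
    """
    base = (cid // stride) * stride * factor + cid % stride
    return [base + k * stride for k in range(factor)]
```

For example, with factor 2 and stride 2, coarse threads 0..3 together cover original threads 0..7 exactly once, in an interleaved pattern, while stride 1 assigns contiguous pairs.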
Future approximate computing techniques, which for example involve reducing the voltage of compute units in order to trade off power against the chance of calculation errors, would produce a situation where architectural exploration would involve optimizing across all three targets (rather than just power and runtime). **3.3 Multi-Objective Optimization Goal** Figure 4 presents a fictitious example depicting samples (in green) over a 2-dimensional optimization space. In order to meet the runtime and accuracy thresholds (dashed lines), the solutions of our exploration are confined to the bottom-left region of the space, the targeted prediction area (in black). For visualization purposes we are only showing two performance metrics, namely the error and the runtime. Figure 4: Illustrative example based on fictitious data. This is a two-objective optimization goal in the error and runtime performance metrics. The samples in green are spread all over the space. We are interested in the region highlighted by the black circle, namely the targeted prediction area. The Pareto front is represented in blue. In a multi-objective optimization, a single solution that minimizes all performance metrics simultaneously does not exist in general. Therefore, attention is paid to Pareto-optimal solutions (in blue); that is, solutions that cannot be improved in any of the objectives without degrading at least one of the other objectives. We aim to find the configurations that are simultaneously in the targeted prediction area and on the Pareto front. **3.4 Incremental Co-Design Space Exploration** We tackle the co-design space exploration incrementally. We first apply the active learning regressor to the algorithmic parameters. The compiler transformations/optimizations are then applied to the Pareto-optimal front points obtained.
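The Pareto-optimality criterion described above translates directly into a filter over sampled configurations. A minimal sketch, assuming each sample is represented as a tuple of objectives to be minimized (e.g. runtime, power, error):

```python
def pareto_front(points):
    """Return the subset of points not dominated by any other point.

    Each point is a tuple of objectives to minimize. A point p is
    dominated if some other point q is no worse than p in every
    objective and strictly better in at least one.
    """
    front = []
    for p in points:
        dominated = any(
            q != p and all(q[i] <= p[i] for i in range(len(p)))
            for q in points
        )
        if not dominated:
            front.append(p)
    return front
```

The quadratic scan is fine for the few hundred evaluated configurations discussed here; it is not intended for the full million-point space, which is only ever ranked through the predictor.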
Since a general tool to drive the compiler exploration is not available for the set of vanilla and advanced compilation parameters that we aim to explore, the compiler space is explored with a mixture of manual and exhaustive search. The Pareto-optimal points are the best-performing points of the algorithmic space and are used as the input to the compiler space. The architecture space is then exhaustively evaluated, since the size of this space is relatively small (160 points). Our incremental approach enables us to refine the optimal solutions in different steps. **4. SMART SEARCH** The algorithmic parameter space we are investigating is too large to be exhaustively evaluated on the hardware platform. Thus we take the cheaper route of training a predictive machine learning model over a handful of examples (points in the parameter space) evaluated on hardware. We want to use this model to accurately predict the performance over the entire parameter space, while being many orders of magnitude faster than running the application on hardware over a video sequence for millions of parameter settings. Unfortunately, since we do not know the performance over the parameter space, we are also unaware of the points for which running a physical experiment will be most informative, in the sense of yielding the greatest increase in the prediction accuracy of our model: a classic chicken-and-egg problem. Thus, we resort to bootstrapping predictive models (three separate randomized decision forests for accuracy, runtime, and power prediction) from a small number of randomly drawn samples in the parameter space. These models are then refined in subsequent iterations by drawing more samples from the parameter space (and retraining over the collective set); the new samples are drawn to implicitly maximize the prediction accuracy near the respective Pareto-optimal fronts.
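The bootstrap-and-refine loop just described can be sketched end to end. This toy version substitutes a 1-nearest-neighbour predictor for the paper's randomized decision forests, a synthetic objective function for the hardware evaluation, and a simple scalarized ranking for the Pareto-front sampling; all names and numbers are illustrative.

```python
import random

def evaluate(cfg):
    # Stand-in for a hardware run: returns a synthetic (runtime, power)
    # pair for a 2-D configuration. Purely illustrative.
    x, y = cfg
    return (x * x + 1.0, (10 - x) + 0.1 * y)

def predict(train, cfg):
    # Toy 1-nearest-neighbour regressor standing in for the
    # randomized decision forests used in the paper.
    nearest = min(train, key=lambda c: (c[0] - cfg[0]) ** 2 + (c[1] - cfg[1]) ** 2)
    return train[nearest]

def active_learning(space, n_init=5, iters=3, batch=5, seed=0):
    rng = random.Random(seed)
    # Bootstrap: evaluate a few random configurations on "hardware".
    train = {c: evaluate(c) for c in rng.sample(space, n_init)}
    for _ in range(iters):
        # Exploit: predict over the rest of the space, then really
        # evaluate the configurations predicted to be near-optimal.
        preds = {c: predict(train, c) for c in space if c not in train}
        ranked = sorted(preds, key=lambda c: sum(preds[c]))
        for c in ranked[:batch]:
            train[c] = evaluate(c)
    return train
```

Each iteration spends its measurement budget where the current model believes the front lies, which is the exploration/exploitation interleaving the paper relies on.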
This strategy of letting the predictive model decide which samples will be most beneficial in increasing predictive accuracy over unseen regions of the parameter space is called active learning [16, 41]. Note that we explored a number of base predictive models including artificial neural networks, support vector machines, and nearest neighbors. Our experiments indicated that randomized decision forests outperform these methods, thus we stick to this class of models throughout this paper. This methodology is depicted in Figure 5 and explained in the next sections. 4.1 Randomized Decision Forest A decision tree is a tool widely used to formalize decision-making processes across a variety of fields. A randomized decision tree is an analogous machine learning model, which “learns” how to classify (or regress) data points based on randomly selected attributes of a set of training examples. The combination of many weak regressors (binary decisions) allows approximating highly non-linear and multimodal functions with great accuracy. A randomized decision forest [12] combines many such decorrelated trees, based on randomization at the level of training data points and attributes, to yield an even more effective supervised classification and regression model. A decision tree represents a recursive binary partitioning of the input space, and uses a simple decision (a one-dimensional decision threshold) at each non-leaf node that aims at maximizing an “information gain” function. Prediction is performed by “dropping” the test data point from the root, and letting it traverse a path decided by the node decisions, until it reaches a leaf node. Each leaf node has a corresponding function value (or probability distribution on function values), adjusted according to training data, which is predicted as the function value for the test input. During training, randomization is injected into the procedure to reduce variance and avoid overfitting.
This is achieved by training each individual tree on randomly selected subsets of the training samples (also called bagging), as well as by randomly selecting the deciding input variable for each tree node to decorrelate the trees. Figure 6 depicts a decision tree that performs classification over two input dimensions $X_1$ and $X_2$, and predicts a class for the respective regions. A regression random forest is built from a set of such decision trees whose leaf nodes output the average of the training data labels; the output of the whole forest is the average of the predictions of the individual trees. In our experimental setting, we train separate regressors to learn the mapping from our input (parameter) space to each output variable, i.e., the three performance metrics.

**4.2 Active Learning**

Active learning is a paradigm in supervised machine learning which achieves better prediction accuracy with fewer training examples, by iteratively training a predictor and using it, in each iteration, to choose the training examples that will increase its accuracy the most. The accuracy of the predictive model is thus incrementally improved by interleaving exploration and exploitation steps, as shown by the feedback loop in Figure 5. We initialize our base predictors (randomized decision forests) from a very small number of randomly sampled points in the parameter space. For these points the application is evaluated over a video sequence on the hardware platform, yielding the accuracy, runtime, and power consumption corresponding to these points (labels, in a supervised setting). Since our objective is to accurately estimate the points near the Pareto optimal front, we use the current predictor to estimate performance values over the entire parameter space and thus estimate the Pareto fronts for accuracy, runtime, and power (separately), such as the one in Figure 4.
For the next iteration, only parameter points near the predicted Pareto front are sampled (and evaluated on hardware), and subsequently used to train new predictors over the entire collection of training points from the current and all previous iterations. This process is repeated over a number of iterations. Our experiments (Section 5.3) indicate that this smarter way of searching for highly informative parameter points in fact yields superior predictors compared to a baseline that uses randomly sampled points alone. Thus, by iterating this process several times in the active learning loop, we are able to discover high-quality design points.

**5. EXPERIMENTAL EVALUATION**

In this section we describe how we evaluated our novel co-design space exploration techniques. We begin by providing a more detailed description of the target platforms (Section 5.1). We then briefly summarize our key results (Section 5.2), before providing more detail on the results of each stage of our co-design space exploration in Sections 5.3, 5.4, and 5.5.

**5.1 Platforms**

We use the popular Hardkernel ODROID-XU3 platform, based on the Samsung Exynos 5422, for all of our experiments (refer to Table 2). This board has been previously evaluated for use in UAV applications [21, 24], and is also used in the evaluation of SLAMBench [31]. We also considered the ASUS T200TA (refer to Table 2) for comparison during the algorithmic and compiler space explorations. As mentioned earlier, this platform does not provide enough flexibility for a full exploration including the hardware space. The Exynos 5422 includes a Mali-T628-MP6 GPU alongside ARM's big.LITTLE heterogeneous multiprocessing solution, consisting of four Cortex-A15 "big" performance-tuned out-of-order cores and four Cortex-A7 "LITTLE" energy-tuned in-order cores. The Mali-T628-MP6 GPU consists of two separate OpenCL devices: one with four cores and another with two.
In our experiments we use only the four-core OpenCL device, which excludes partitioning tasks across multiple GPU devices. This is a potential avenue to explore in order to deliver even higher performance within a power budget. The ODROID-XU3 platform has integrated power monitors with on-board voltage/current sensors and split power rails. This allows independent power measurements for the "big" cores, "LITTLE" cores, GPU, and DRAM. The SLAMBench benchmarking framework natively provides an interface to access and log these power sensors. We also measure performance on an Intel Atom [3] platform in the form of an ASUS Transformer T200 tablet. This contains an Intel Atom Z3795 SoC, which includes a quad-core Intel Atom CPU running at up to 2.4 GHz. An Intel HD Graphics GPU is also present, containing 6 execution units and running at up to 778 MHz. We use the open-source Beignet [4] OpenCL runtime, which supports version 1.2 of the OpenCL standard and was produced by Intel's Open Technology Center.

**5.2 Overall Results**

We observe that the default configuration provides a frame-rate of 6 FPS for a power budget of 2.77 Watts. Our co-design space exploration results (refer to Table 3) show significantly better frame-rates with reduced power consumption and comparable accuracy. As an example, consider a power budget of 1 W. Our results show that a configuration exists in the real-time range (29.09 FPS) with an ATE (4.47 cm) similar to that of the default configuration. The selected best configurations perform well across datasets and in live mode using an actual RGB-D ASUS Xtion Pro camera. Active learning effectively and consistently pushes the Pareto front towards better solutions. Taking into account the domain layer of the stack unleashes performance trade-offs unavailable to the more usual compiler optimizations.
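The constrained rows of Table 3 correspond to simple filters over the set of evaluated configurations. A minimal sketch, using a handful of illustrative (fps, max ATE, power) triples echoing Table 3 rather than the full set of evaluated points:

```python
# (fps, max_ate_cm, power_w) for a handful of evaluated configurations;
# values echo Table 3 (the full exploration evaluates many more points).
configs = [
    (6.05, 4.41, 2.77),   # default
    (39.85, 4.47, 1.47),
    (1.51, 3.30, 2.38),
    (11.92, 4.45, 0.65),
    (29.09, 4.47, 0.98),
    (32.38, 4.47, 1.01),
    (28.87, 4.47, 0.91),
]

def fastest_under_power(configs, budget, max_ate=5.0):
    """Fastest valid (ATE < max_ate) configuration within a power budget."""
    feasible = [c for c in configs if c[1] < max_ate and c[2] < budget]
    return max(feasible, key=lambda c: c[0]) if feasible else None

def cheapest_above_fps(configs, fps_floor, max_ate=5.0):
    """Lowest-power valid configuration above a frame-rate floor."""
    feasible = [c for c in configs if c[1] < max_ate and c[0] > fps_floor]
    return min(feasible, key=lambda c: c[2]) if feasible else None

print(fastest_under_power(configs, 1.0))   # "Power < 1W" row -> (29.09, 4.47, 0.98)
print(cheapest_above_fps(configs, 30.0))   # "FPS > 30" row  -> (32.38, 4.47, 1.01)
```

The same filter-then-optimize pattern yields every constrained row of Table 3 from the evaluated point set.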
In fact, our algorithmic design space exploration provides the greatest improvement on the performance metrics, by a large factor (refer to Section 5.3). However, exploration of the hardware parameters shows that important speed/power trade-offs can be obtained in this space. In particular, as we shall see, the greatest improvement in power consumption is provided by the exploration of hardware parameters.

**5.3 Algorithmic Design Space Exploration**

The algorithmic space consists of the application parameters summarized in Table 1 and described at the top of Section 3.2. As described in Section 4, we first sample this space at random, and then use active learning in order to push the Pareto front toward better solutions (refer to Figure 7).

**Sampling.** We draw 3,000 uniformly distributed random samples from the parameter space and evaluate the KFusion pipeline on the video stream; for both platforms the cumulative runtime is roughly 5 days. Using random sampling, we observe that the Pareto front cannot be improved beyond 2,000 samples. Thus, there is an inflection point beyond which random sampling is unproductive.

**Active learning.** In order to further explore optimal points in the design space, we employ active learning in conjunction with randomized decision forests (Section 4.2). For the ODROID-XU3 this produces 1,142 new samples after 6 iterations, thus increasing the total number of samples to 4,142.
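The uniform sampling step above can be sketched as follows. The parameter names and domains here are hypothetical stand-ins (the real domains are those of Table 1); the point is only how a mixed discrete/continuous space is sampled.

```python
import random

# Hypothetical domains for a few of the algorithmic parameters;
# the real domains are defined in Table 1.
SPACE = {
    "volume_resolution": [64, 128, 256, 512],        # discrete choices
    "compute_size_ratio": [1, 2, 4, 8],
    "integration_rate": [1, 2, 3, 5, 10, 15, 20, 30],
    "mu": (0.01, 0.2),                               # continuous range
    "icp_threshold": (1e-6, 1e-4),
}

def draw_sample(rng):
    """Draw one uniformly random configuration from the mixed
    discrete/continuous parameter space."""
    cfg = {}
    for name, domain in SPACE.items():
        if isinstance(domain, tuple):        # continuous: uniform in [lo, hi]
            lo, hi = domain
            cfg[name] = rng.uniform(lo, hi)
        else:                                # discrete: uniform choice
            cfg[name] = rng.choice(domain)
    return cfg

rng = random.Random(42)
samples = [draw_sample(rng) for _ in range(3000)]    # as in the Sampling step
```

Each drawn configuration would then be evaluated on the board over the video sequence to obtain its accuracy, runtime, and power labels.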
<table> <thead> <tr> <th>Constraint</th> <th>Speed (FPS)</th> <th>Max ATE (cm)</th> <th>Power (Watts)</th> </tr> </thead> <tbody> <tr> <td>Default</td> <td>6.05</td> <td>4.41</td> <td>2.77</td> </tr> <tr> <td>Best runtime</td> <td>39.85</td> <td>4.47</td> <td>1.47</td> </tr> <tr> <td>Best accuracy</td> <td>1.51</td> <td>3.30</td> <td>2.38</td> </tr> <tr> <td>Best power</td> <td>11.92</td> <td>4.45</td> <td>0.65</td> </tr> <tr> <td>Power &lt; 1W</td> <td>29.09</td> <td>4.47</td> <td>0.98</td> </tr> <tr> <td>Power &lt; 2W</td> <td>39.85</td> <td>4.47</td> <td>1.47</td> </tr> <tr> <td>FPS &gt; 10</td> <td>11.92</td> <td>4.45</td> <td>0.65</td> </tr> <tr> <td>FPS &gt; 20</td> <td>28.87</td> <td>4.47</td> <td>0.91</td> </tr> <tr> <td>FPS &gt; 30</td> <td>32.38</td> <td>4.47</td> <td>1.01</td> </tr> </tbody> </table>

Table 3: Best performance on the ODROID-XU3 platform, running KFusion under the given constraints.

Furthermore, by using the active learning technique, we observe 642 new configurations with an ATE of less than 5 cm on the ODROID-XU3 and 665 on the ASUS T200TA. This means we have produced twice as many valid points as random sampling, for roughly a third of the number of samples. These ratios are an indicator of the effectiveness of our active learning-based prediction model. There is a discrepancy between predicted and measured performance; this is shown by the active learning points in the figure that do not lie on the Pareto front. A performance comparison is also available in Table 4.

Figure 8: Impact of algorithmic parameters (x-axis) on the performance metrics (y-axis) for the ODROID-XU3 platform (the equivalent diagram for the ASUS T200TA is similar). Bigger squares indicate a higher correlation. A white square denotes a parameter which, when increased, improves the corresponding metric, whilst a black square shows a worsening.
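Each square in Figure 8's Hinton diagram encodes one parameter-metric correlation, which can be computed directly from the evaluated samples. A sketch with invented toy numbers (the real inputs would be the thousands of evaluated configurations):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Toy data: rows are evaluated configurations (hypothetical numbers).
# Parameter columns: compute_size_ratio, mu -> metric columns: fps, max ATE.
params  = [(1, 0.02), (2, 0.05), (4, 0.08), (8, 0.12), (2, 0.10), (4, 0.03)]
metrics = [(5.0, 3.1), (11.0, 3.9), (22.0, 4.4), (40.0, 4.9), (10.5, 4.2), (23.0, 3.5)]

param_names  = ["compute_size_ratio", "mu"]
metric_names = ["fps", "max_ate"]
for j, pn in enumerate(param_names):
    for k, mn in enumerate(metric_names):
        r = pearson([p[j] for p in params], [m[k] for m in metrics])
        # in a Hinton diagram, |r| sets the square size and sign(r) its colour
        print(f"{pn:>20} vs {mn:<8} r = {r:+.2f}")
```

Note that, as the text points out, such a linear correlation deliberately misses the non-linear effects (e.g., of volume resolution), which is why the decision-tree analysis of Figure 9 complements it.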
**Relationship between parameters and metrics.** It is particularly interesting to analyze the impact of each algorithmic parameter in isolation. To study the linear relationship of the algorithmic parameters with the performance metrics (frame-rate, accuracy, and power) we use the Hinton diagram in Figure 8. In this figure, each square denotes the correlation between a (parameter, metric) pair, i.e., the linear relation between a parameter on the x-axis and a performance metric on the y-axis. A bigger square denotes a stronger correlation, and the square's color denotes whether a parameter has a positive (white) or negative (black) correlation with the performance metric. For example, an increased compute size ratio improves power efficiency and frame-rate but degrades accuracy. This analysis has applicability beyond our design space exploration: it gives insight into the linear relationships between algorithmic parameters and performance goals (frame-rate, accuracy, and power). As the figure shows, several parameters have a non-linear relationship with the performance metrics. It would be difficult for a domain expert to grasp these high-dimensional non-linearities, which emphasizes the importance of automated analysis. For example, mu, integration rate, icp threshold, and the compute size ratio are the parameters with a strong linear relationship to accuracy, while only the compute size ratio is strongly linearly related to execution time.

**Effectiveness of the active learning method.** Figure 7 shows the overall improvement of the Pareto front obtained with active learning (in red) compared to the Pareto front obtained with random sampling (in black). For the ODROID-XU3 we observe that random sampling provides a set of 333 valid configurations, i.e., 333 configurations with a max ATE smaller than 5 cm. For the ASUS T200TA, we found 291 valid configurations during the sampling.
Table 4: Comparison of random sampling (RS) best solutions with active learning (AL) search over the algorithmic space for the ODROID-XU3 platform.

We list the results for the default configuration given in the original SLAMBench paper [5], and our own results running the same configuration using a newer version of the SLAMBench package. For consistency, in this work our baseline version for performance comparison is the default measured version. The SLAMBench paper reports error as the mean ATE over the whole workload; we report it as the maximum ATE over any frame of the workload.

**Interpreting the results.** We rank the different algorithmic parameters as a function of their influence on the performance metrics. This is shown as a decision tree (Figure 9) for the ODROID-XU3 platform. The salient advantage of the decision tree is that it can be readily understood. For the sake of presentation we only plot a few levels of the tree. We observe in our results that the Volume resolution is the algorithmic parameter with the most significant impact on performance. Hence, it is at the root node, with a decision threshold of 96. Note that 96 is not a valid value for the Volume resolution, but it can be seen as a midpoint between two valid values, 64 and 128. In addition, note that in Figure 8 the correlation between Volume resolution and the performance metrics is relatively small; this further highlights the highly non-linear nature of this parameter. The symbols indicate whether a target performance goal is achieved (no cross) or not achieved (cross). As we can see, there are two branches that contain configuration points satisfying the three performance metric thresholds depicted in the legend. These branches are Volume resolution < 96 and Compute size ratio ≥ 3, or Volume resolution ≥ 192 and 3 ≤ Compute size ratio ≤ 6.
When a performance metric in a branch is marked with a cross, it means that no configuration in that entire sub-tree is able to meet that performance metric requirement. By using the described techniques to explore the algorithmic space, we have obtained a 6.35x improvement in execution time (best speed) and a 23.5% reduction in power consumption (best power) compared to the default configuration on the ODROID-XU3 board. This means that, even without performing further exploration of the compiler and architectural spaces, we are already able to meet our design requirements. However, as will be seen, some runtime improvements and significant reductions in power consumption can still be obtained.

**5.4 Compiler Space**

Table 1 summarizes the compiler parameters explored in our study. We use the 36 Pareto optimal points of the algorithmic space to conduct a compiler design space study. In other words, we take the best-performing configurations of the algorithmic space and use this sub-set of optimal points to further explore the compiler space, as explained in Section 3.4.

**Optimizations.** We consider compiler optimizations that affect only the kernels in isolation. Kernel transformations such as vectorization, thread coarsening, and OpenCL local workgroup sizes fall into this category. We optimize each kernel independently, which enables us to undertake an exhaustive exploration of the space of our selected optimizations for each kernel. For each vectorizable kernel, we consider every possible vectorization length (4 possibilities per axis, for a total of 8) and select the best-performing configuration. Furthermore, for each kernel we explore 72 different thread coarsening values; a thread coarsening value is a combination of factor, stride, and dimension (see Table 1). To perform this transformation, we use an automatic source-to-source thread coarsening generation tool [29].
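Because each kernel is optimized independently, the exhaustive per-kernel search amounts to iterating over a small Cartesian product of transformation parameters and keeping the fastest variant. The sketch below uses a synthetic timing function and hypothetical domains (the real domains are those of Table 1) in place of actually compiling and measuring each variant on the board:

```python
import itertools

# Hypothetical per-kernel optimisation domains, mirroring the shape of
# Table 1: a vector length and a thread-coarsening (factor, stride, dim).
VECTOR_LENGTHS  = [1, 2, 4, 8]
COARSEN_FACTORS = [1, 2, 4, 8]
COARSEN_STRIDES = [1, 2, 32]
COARSEN_DIMS    = [0, 1]

def time_kernel(kernel, cfg):
    """Stand-in for compiling and running one kernel variant on the board.
    Returns a synthetic runtime in ms; a real harness would measure it."""
    vec, factor, stride, dim = cfg
    base = {"bilateralFilter": 12.0, "halfSampleRobustImage": 4.0}[kernel]
    return base / (1 + 0.1 * vec) / (1 + 0.05 * factor) + 0.01 * stride

def best_config(kernel):
    """Exhaustively evaluate the per-kernel space and keep the fastest."""
    space = itertools.product(VECTOR_LENGTHS, COARSEN_FACTORS,
                              COARSEN_STRIDES, COARSEN_DIMS)
    return min(space, key=lambda cfg: time_kernel(kernel, cfg))

for k in ("bilateralFilter", "halfSampleRobustImage"):
    print(k, best_config(k))
```

Exhaustive search is affordable here precisely because the per-kernel product is small (a few hundred variants per kernel at most), unlike the algorithmic space.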
In addition, we also include several LLVM optimizations, using the LLVM flags listed in Table 1. As these optimizations mainly affect the runtime, in Figure 10 we show only the impact of exploring the compiler design space on the runtime performance of each kernel of KFusion. These kernels correspond to those summarized in Figure 2; numerical suffixes differentiate input datasets. We observe an average runtime speed-up of 23% with the ODROID-XU3 and 16% with the ASUS T200, and a maximum speed-up of 80% for the 'halfSampleRobustImageKernel1' kernel on the ODROID-XU3.

Figure 10: For each Pareto optimal solution in the algorithmic space, we apply some common and advanced compiler optimization techniques. The graph shows the average speed-up obtained for each kernel, for both the ODROID and ASUS platforms.

Other kernels achieving speedups are 'bilateralFilter' (which performs image smoothing in the KFusion front end), and kernels associated with the reduction ('reduce') and tracking ('track') portions of the algorithm. However, we observe that these optimizations are effective only on the less computationally intensive kernels, and hence they have little significant impact on the overall execution time of KFusion.

**Impact of the compilation parameters.** We observe that the compiler parameters explored in our study provide only modest performance improvements. Specifically, we find that a real-time frame rate, i.e., 30 FPS, cannot be obtained by applying the compiler optimizations alone. This highlights the need for co-exploring the algorithmic, compiler, and architecture design spaces. When running KFusion on the ODROID-XU3 with the optimized kernels, we observe an average 6% performance improvement, and a maximum 20% performance improvement, over the set of Pareto optimal points obtained from the algorithmic space exploration.

**5.5 Architecture Space**

The last stage in our incremental co-design space exploration is exploring the architecture parameters in Table 1.
Note that the architecture parameters have an impact only on runtime and power; the ATE is not affected by this space exploration. This exploration is performed across the Pareto optimal points obtained from the compiler design space stage, under the constraint that they are accurate enough, i.e., ATE < 5 cm. This constraint is a reasonable assumption for most applications in the SLAM domain.

**Exhaustive exploration.** Since the hardware design space contains only 160 configurations on the ODROID-XU3 platform, it can be explored exhaustively. We visualize the power consumption and runtime dimensions in Figure 11. The black cross depicts the default configuration and the black line is the Pareto front. We observe that a configuration exists that provides a frame-rate of 32.38 FPS (runtime 0.03 seconds) while drawing only 1.01 Watts of power. This is an interesting configuration, as it supports real-time performance while consuming minimal power. We further observe that there exists a configuration on the Pareto front that provides a frame-rate of nearly 40 FPS while consuming less than 2 Watts of power. There is also an extreme low-power configuration (0.65 W) in which we trade power consumption for a lower frame rate. The improvements obtained from the architecture and compilation spaces can be seen in Table 5.

Table 5: Compiler and architecture space exploration improvements. Our technique has been able to obtain significant improvements in power consumption without compromising execution time, and has also found configurations suitable for extremely power-constrained environments.
<table> <thead> <tr> <th>Constraint</th> <th>FPS</th> <th>Error</th> <th>Power</th> </tr> </thead> <tbody> <tr> <td>Best speed</td> <td>38.28</td> <td>4.47</td> <td>2.16</td> </tr> <tr> <td>Before</td> <td>39.85</td> <td>4.47</td> <td>1.47</td> </tr> <tr> <td>After</td> <td>38.07</td> <td>4.45</td> <td>0.65</td> </tr> <tr> <td>Best power</td> <td>11.92</td> <td>4.45</td> <td>0.65</td> </tr> </tbody> </table>

**5.6 Discussion**

The majority of the improvement in runtime came from optimizing at the algorithmic stage. By tuning the various parameters in the co-design space, we were able to achieve significant improvements in both execution time and power consumption (see Table 5). Our optimizations over the compilation and architectural spaces provided minimal improvements in runtime performance, meaning that if we had focused on only the 'lower' two layers of our design space, we would not have been able to reach our design goals with our selected platform. This shows the importance of incorporating domain knowledge into the design space. Although this multi-layered approach may not have converged on an optimal set of solutions, it reduced the size of the design space significantly and allowed us to find good configurations much more quickly. Importantly, modifying the algorithmic parameters significantly affects the runtime profile of KFusion. As the Hinton plot in Figure 8 shows, each parameter can have varying effects on runtime, accuracy, and power consumption. If we had focused on only the lower tiers of the optimization space, we would have missed this significant opportunity for improvements in runtime and power consumption, and instead obtained only minor improvements.

**6. RELATED WORK**

The computer vision community primarily focuses on developing accurate algorithms, almost always running on high-performance and power-hungry systems. As computer vision technology matures, a few benchmarks have attempted to refocus research on runtime-constrained contexts.
Similarly, new challenges such as the Low-Power Image Recognition Challenge (LPIRC 2016) are emphasizing the importance of low-power embedded implementations of computer vision applications. In this context, SLAMBench recently enabled quantitative, comparable, and validatable experimental research in the form of a benchmark framework for dense 3D scene understanding on a wide range of devices. Adding energy consumption as a metric when evaluating computer vision applications has enabled energy-constrained systems, such as battery-powered robots and embedded devices, to become evaluation platforms. The work of Zeeshan et al. is a first attempt at exploring SLAM configuration parameters, trading off performance for accuracy on embedded systems. During the last two decades, several design space exploration techniques and frameworks have been used in a variety of contexts, ranging from embedded devices to compiler research and system integration. Kang et al. proposed a system which reduces the size of the design space by considering sets of design points to be equivalent. Hu et al. present a user-guided design space exploration framework, allowing the user to identify both good and bad design regions, and hence guide the subsequent search. Ansel et al. introduced an extensible and portable framework for empirical performance tuning. It runs an ensemble of search techniques, systematically allocating larger budgets to those that perform well, using a multi-armed bandit optimal budget allocation strategy. Norbert et al. tackle the software configurability problem for both binary and numeric options using a performance-influence model based on linear regression. They optimize for execution time on several examples, exploring the algorithmic and compiler spaces in isolation. Machine learning (ML) techniques in particular have recently been employed in both architectural and compiler research. Khan et al.
employed predictive modeling for cross-program design space exploration in multi-core systems. The techniques developed managed to explore a large design space of chip multiprocessors running parallel applications with low prediction error. Similarly, Ipek et al. employed an artificial neural network to predict the impact of hardware parameters, e.g., cache sizes and buffer sizes, on the performance of a particular architecture. Furthermore, Lee et al. used polynomial regression to predict power and performance on a multiprocessor design space. Chen et al. suggest that an ML model can be used to produce a relative ranking of design points, rather than predicting their performance precisely. Regarding compiler optimization research, several efforts to apply ML in this field have been undertaken during the last decade. Cavazos et al. used ML to discover which sequence of compiler optimizations applies best to the programs being executed. Moreover, other research efforts employ ML techniques for various compiler optimizations, e.g., loop unrolling, common subexpression elimination, and loop hoisting, based on program features. In contrast to the aforementioned research, to the best of our knowledge, our work is the first to conduct a vertical co-design space exploration, taking into account the algorithmic, compiler, and hardware layers in order to solve a three-objective optimization problem. Furthermore, to the best of our knowledge, we are the first to show that random forests in conjunction with active learning are effective in focusing the search for Pareto optimal configurations in this context.

**7. CONCLUSIONS AND FUTURE WORK**

We have considered an incremental co-design space exploration with a three-objective optimization goal, optimizing jointly over the runtime, power, and accuracy dimensions. Our incremental co-design is able to combine trade-offs at different levels in the system, refining the Pareto front in subsequent optimization stages.
We have demonstrated our methodology on a popular multi-kernel dense SLAM implementation. As a result, for the first time, this implementation runs in the real-time range on a device with a power budget of 1 W. This is a 4.8x improvement in runtime and a 2.8x improvement in power consumption over a hand-tuned implementation by a SLAM domain expert on the same platform, for a similar accuracy. This work goes beyond conventional benchmarking in computer systems research by exposing the algorithmic-level design space. In future work, we will explore how our approach generalizes to different applications, compilers, and platforms. We will investigate variable selection methods that reduce the dimension of the space by creating a new feature space, which will enable us to consider larger spaces for bigger mapping problems. There are also a large number of opportunities in transfer learning approaches. In particular, each configuration is likely to give a similar accuracy across a range of devices, and this knowledge might be used to guide exploration toward more interesting points from a power/runtime perspective. Alternatively, we might keep the hardware fixed, and use learned knowledge of the architectural space to more effectively search through design points for different applications running on the same hardware.

**8. ACKNOWLEDGMENTS**

We acknowledge funding by the EPSRC grant PAMELA EP/K008730/1. M. Lujan is funded by a Royal Society University Research Fellowship. We thank the PAMELA Steering Group for the useful discussions.

**9. REFERENCES**

[9] ARM Ltd. big.LITTLE technology.
Resilient Computing on ROS using Adaptive Fault Tolerance

Michaël Lauer¹*, Matthieu Amy, Jean-Charles Fabre², Matthieu Roy, William Excoffon and Miruna Stoicescu³
LAAS-CNRS, Université de Toulouse, CNRS, ¹ UPS, ² INP, Toulouse, France
³ EUMETSAT, Darmstadt, Germany

HAL Id: hal-01703968, https://hal.laas.fr/hal-01703968, submitted on 16 Feb 2018.

SUMMARY

Computer-based systems are now expected to evolve during their service life in order to cope with changes of various nature, ranging from evolution of user needs, e.g., additional features requested by users, to system configuration changes, e.g., modifications in available hardware resources. When considering resilient embedded systems that must comply with stringent dependability requirements, the challenge is even greater, as evolution must not impair dependability attributes. Maintaining dependability properties when facing changes is, indeed, the exact definition of resilient computing. In this paper, we consider the evolution of systems with respect to their dependability mechanisms, and show how such mechanisms can evolve with the system evolution, in the case of ROS, the Robot Operating System. We provide a synthesis of the concepts required for resilient computing using a component-based approach.
We particularly emphasize the process and the techniques needed in order to implement an adaptation layer for fault tolerance mechanisms. In the light of this analysis, we address the implementation of Adaptive Fault Tolerance (AFT) on ROS (the Robot Operating System) in two steps: first, we provide an architecture to implement fault tolerance mechanisms in ROS, and second, we describe the actual adaptation of fault tolerance mechanisms in ROS. Beyond the implementation details given in the paper, we draw the lessons learned from this work and discuss the limits of this run-time support for implementing AFT features in embedded systems.

KEY WORDS: Adaptive fault tolerance; ROS; Resilience

1. INTRODUCTION

Evolution during service life is very frequent in many systems nowadays, including dependable systems. Such evolution leads to modifications of the system's software and hardware configuration. A challenge for the dependability community is to develop systems that remain dependable when facing changes (new threats, changes in failure modes, application updates). The persistence of dependability when facing changes (defining the resilience of the system [1]) encompasses several aspects, among which evolvability is a key concept. Handling evolution involves new development processes, such as agile development methods, but also run-time supports that enable modifications at run-time. At run-time, dependability relies on fault-tolerant computing, i.e., a collection of Fault Tolerance Mechanisms (FTMs) attached to the application according to its criticality level. In this context, one of the key challenges of resilient computing is the capacity to adapt the FTMs attached to an application during its operational life. In resilient systems, faults lead to failure modes that may violate dependability properties.
The role of the safety analysis (e.g., using FTA, Fault Tree Analysis, or FMECA, Failure Modes, Effects and Criticality Analysis) is to identify the failure modes and the fault model, and then define the safety mechanisms to prevent the violation of safety properties. Such safety mechanisms rely on basic error detection and recovery mechanisms, namely fault tolerance techniques, which are based on Fault Tolerance Design Patterns (FTDPs) that can be combined together. During the operational life of the system, several situations may occur. For example, new threats may lead to a revision of the fault model (electromagnetic perturbations, obsolescence of hardware components, software aging, etc.). A revision of the fault model has, of course, an impact on the fault tolerance mechanisms. In other words, the validity of the fault tolerance mechanisms or the safety mechanisms depends on the representativeness of the fault model. A poor identification of the fault model may lead, first, to paying for useless mechanisms in normal operation and, second, to a very low coverage of erroneous situations. This has an obvious side effect on the dependability measures (e.g., reliability). A change in the definition of the fault model often implies a change in the fault tolerance mechanisms. Beyond the fault model, there are other sources of changes. Resource changes may also impair some safety mechanisms that rely on hardware resources. A typical example is the loss of processing units, but even a loss in network bandwidth may invalidate some fault tolerance mechanisms from a timing viewpoint. Application changes are more and more frequent during the operational lifetime of a system. This is obvious for conventional applications (e.g., mobile phones), but it is also becoming necessary for more critical embedded systems.
Today, this is the case for long-living systems like space or avionics systems, but also in the automotive domain, not only for maintenance purposes but also for commercial reasons. The notion of versioning (updates) or the loading of additional features (upgrades) may invalidate the assumptions on which the implementation of FT mechanisms relies. Such changes imply revisiting the FMECA spreadsheets, but also the implementation of the FT mechanisms. Some FT mechanisms rely on strong assumptions about the lower-level behavior, and the importance of assumption coverage [2] has been known for decades in the dependability community. Whatever the system's evolution during its whole lifetime, the safety mechanisms must remain consistent with all assumptions and operational conditions in terms of fault model, resource availability and application characteristics. Thus, the FT mechanisms must be adapted accordingly, leading to the notion of Adaptive Fault Tolerance (AFT). Contributions: This work provides the following three contributions: i) we describe a concise synthesis of the concepts required by any Adaptive Fault Tolerant system; this synthesis is oriented towards deriving the required support from the underlying operating system or middleware; ii) we propose an architectural model to implement generic and composable FT mechanisms on ROS, in a way that makes their integration transparent to the application, a prerequisite to their dynamic adaptation; and iii) we analyze in detail to what extent the run-time adaptation of FT mechanisms in ROS is feasible, and discuss the cost of this adaptation. In a first part, we summarize our approach to implementing Adaptive Fault Tolerance, enabling partial updates of FTMs to be carried out on-line. We take advantage of Component-Based Software Engineering technologies for implementing the adaptation of fault tolerance mechanisms.
The minimal run-time support for implementing adaptive fault tolerance must provide one-to-one mapping of components to run-time units, segregation between components, and dynamic binding between components. In the second part, we analyze to what extent AFT can be implemented on ROS. ROS is presently used in many applications (robotics applications, automotive applications like ADAS, Advanced Driver Assistance Systems, or military applications). We show how ideal components can be mapped to ROS components and give implementation details of adaptive composable FTMs at run-time. We finally draw the lessons learned from our first experiments, which rely on a small case study, to identify the limits of ROS as a run-time support for Adaptive Fault Tolerance. We discuss the limits of the exercise and identify some promising directions for future work. In Section 2 we describe the motivations and the problem statement. We give in Section 3 our definition and understanding of resilient computing. Our Component-Based Software Engineering (CBSE) approach for adaptive fault tolerance is summarized in Section 4. A full account of this approach can be found in [3]. The mapping of this approach to ROS is described in Section 5 and in Section 6, with the latter focusing on dynamic adaptation. The lessons learned are given in Section 7 before concluding.
2. MOTIVATIONS AND PROBLEM STATEMENT
The need for Adaptive Fault Tolerance (AFT), arising from the dynamically changing fault tolerance requirements and from the inefficiency of allocating a fixed amount of resources to FTMs throughout the service life of a system, was stated in [4]. AFT is gaining more importance with the increasing concern for lowering the amount of energy consumed by cyber-physical systems and the amount of heat they generate [5]. For dependable systems that cannot be stopped for performing off-line adaptation, on-line adaptation of Fault Tolerance Mechanisms (FTMs) has attracted research efforts for some time now.
However, most of the solutions [6, 7, 8] tackle adaptation in a preprogrammed manner: all FTMs necessary during the service life of the system must be known and deployed from the beginning, and adaptation consists in choosing the appropriate execution branch or tuning some parameters, e.g., the number of replicas or the interval between state checkpoints. Nevertheless, predicting all events and threats that a system may encounter throughout its service life and making provisions for them is impossible. The use of FTMs in real operational conditions may lead to slight updates or unanticipated upgrades, e.g., compositions of FTMs that can tolerate a more complex fault model than initially expected. This explains why static system configurations with all possible FTMs and all possible combinations (FTM compositions) are not tractable. A form of differential FTM updates is proposed in this work to tackle the unanticipated evolution of dependable systems. In both aeronautical and automotive systems, the ability to perform remote changes for different purposes is essential: maintenance, but also updates and upgrades of large embedded applications. The remote changes should be partial, as it is unrealistic to completely reload a processing unit for small updates. This idea has recently been promoted by car manufacturers like Renault and BMW, but also Tesla Motors in the USA, which states on its website that "Model S regularly receives over-the-air software updates that add new features and functionality". It is important to mention that performing remote changes will become very important for economic reasons, for instance selling options a posteriori, since most of the evolution in the near future will rely on software for the same hardware configuration (sensors and actuators). Evolvability has long been a prerogative of the application business logic.
A rich body of research exists in the field of software engineering consisting of concepts, tools, methodologies and best practices for designing and developing adaptive software [8]. Consequently, our approach for Adaptive Fault Tolerance leverages advancements in this field such as Component-Based Software Engineering [9], Service Component Architecture [10], and Aspect-Oriented Programming [11]. Functional and configuration changes may have a strong impact on dependability, and fault tolerance mechanisms must be updated to remain efficient in the presence of faults. To this aim, our basic idea is the following. Fault Tolerance or Safety Mechanisms are developed as a composition of elementary mechanisms, e.g., basic design patterns for fault tolerant computing. Using such concepts and technologies, we design FTMs as Lego-like brick-based assemblies that can be methodically modified at run-time through fine-grained changes affecting a limited number of bricks. This is the basic idea of our approach, which maximizes reuse and flexibility, contrary to the monolithic replacements of FTMs found in related work, e.g., [6, 7, 8]. However, most software run-time supports used in embedded systems today do not rely on dynamic CBSE concepts. AUTOSAR, for instance, relies on very static system engineering concepts and does not provide much flexibility today [12]. A new approach enabling remote updates to be carried out, including for safety mechanisms, is required. To the best of our knowledge, the componentization and dynamic configuration of fault tolerance mechanisms have not been addressed in previous works. ROS seems an appealing candidate for the dynamic composition of safety mechanisms. ROS is described as†: [...] an open-source, meta-operating system for your robot.
It provides the services you would expect from an operating system, including hardware abstraction, low-level device control, implementation of commonly used functionality, message-passing between processes, and package management. It also provides tools and libraries for obtaining, building, writing, and running code across multiple computers. ROS can be viewed as a middleware running on top of a Unix-based operating system (typically Linux). ROS is used in robotics applications (e.g., Robonaut 2 from NASA within the International Space Station) but also in other industry sectors, the automotive industry for instance. This open-source middleware provides a weak component approach and means to dynamically manipulate the system configuration.
3. RESILIENT SYSTEM AND DESIGN PATTERNS
3.1. Basic principles and definitions
A resilient system architecture is similar to a conventional dependable system architecture, but exhibits additional services, like an Adaptation Engine and a Monitoring Engine. Due to some changes in operation, an FTM may have to evolve, and its development is carried out off-line. The Adaptation Engine's objective is to update the implementation of the FTM on-line with only the necessary and sufficient modifications to make it adequate. The Monitoring Engine checks that the running FTMs are consistent with their assumptions according to system state observation. Any inconsistency detected must trigger an adaptation of the FTM. Monitoring issues are out of the scope of the work reported in this paper. In our framework, an application component $C$ is attached (bound) to an FTM (possibly a composition of several FTMs) following the well-known Separation of Concerns (SoC) principle. The Adaptation Engine is thus responsible for the management of the dynamic link between $C$ and the FTM, but also between components within a composite FTM component. It keeps track of component assemblies for both the application and the FTM.
Fault Tolerance Design Patterns represent solutions to a given fault tolerant computing problem. In Figure 1 we show an extract of an FTDP classification with respect to fault models (F) and application characteristics (A). The fault model $F$ has to be considered first, distinguishing here between hardware and software faults. Regarding hardware faults, patterns can deal with crash faults, and with permanent and transient value faults. In a second step, we refine the selected pattern, a duplex strategy in the example of Figure 1, with application characteristics regarding determinism and state issues. Determinism of execution implies that identical input values lead to identical output results, a key point for active replication. State, if any, also involves the capability to capture that state, which is required for passive replication and checkpointing-based strategies. A Fault Tolerance Mechanism (FTM) is then an implementation of a selected pattern. This classification is obviously very incomplete, but its merit is to show how to select a given FTM. We rely on such criteria, more precisely assumptions, to illustrate adaptive fault tolerant computing. In the remainder of this paper, we will consider FTMs dealing with hardware faults only (permanent or transient). We recognize that software faults are more difficult to handle, both for detection and recovery, and mainly depend on application semantics. In our case, FTMs handling hardware faults are sufficient to perform our analysis targeting ROS as a run-time support for adaptive fault tolerant computing.
†http://wiki.ros.org/ROS/Introduction
3.2. FTM Selection Criteria
As soon as the fault model is determined, several solutions can be investigated depending on the application characteristics that have an impact on the implementation and the validity of the FTM. Depending on determinism and state issues, one implementation of the FTDP is chosen, leading to a concrete FTM. The resource aspect comes last.
Any FTM needs resources to execute. Among the several FTMs that satisfy the F and A assumptions, the selection can be based on local or system-wide criteria. An FTM can be chosen because it requires the smallest set of resources among the valid FTM candidates, or a more complex algorithm can be run to check whether more resources can be granted to an FTM in order to improve some other criterion like response time in normal operation, recovery time, etc.
Figure 1. Extract of FTDP classification
The fault model can obviously be extended with more detailed types of faults, including undesirable events identified in safety analysis. The mechanisms are identified according to the fault model, but their implementation depends very much on the application characteristics. The example given here shows the implication of state and determinism in the selection of a given implementation of a duplex strategy. An extended definition of the fault model, including accidental physical faults both permanent and transient, programming faults, and application undesirable events considered in safety analysis, may lead to the composition of several FTMs. This issue is considered in this paper and illustrated in Section 5. The next Section focuses on describing the basic concepts that underlie an adaptive fault tolerant system.
4. ADAPTIVE FAULT-TOLERANCE
In this Section, we synthesize the essential concepts needed to address the problem of Adaptive Fault Tolerant computing. The extensive discussion is out of the scope of this paper and can be found in [3, 13, 14, 15].
4.1. Basic concepts of AFT
Three software development concepts are, in our view, essential for adaptive fault tolerance [13, 14]:
- **Separation of Concerns**: this concept is now well known; it implies a clear separation between the functional code, i.e., the application, and the non-functional code, i.e., the fault tolerance mechanisms in our case.
The connection between the application code and the FTM must be clearly defined through specific connections. This means that an FTM can be disconnected and replaced by a new one, provided the connectors remain the same.
- **Componentization**: this concept means that any software component can be decomposed into smaller components. Each component exhibits interfaces (services provided) and receptacles (services required). This means that any FTM can be decomposed into smaller pieces, and conversely that an FTM is the aggregation of smaller ones. The ability to manipulate the bindings between components (off-line but also on-line) is of high interest for AFT.
- **Design for adaptation**: the adaptation of software systems implies that i) the software itself has been analyzed with adaptation in mind for later evolution using componentization (although all situations cannot be anticipated), and ii) software systems have been designed to simplify adaptation, including from a programming viewpoint (e.g., using object-oriented or aspect-oriented programming concepts).
Such basic concepts have been established and validated through various steps of analysis of fault tolerance design patterns and after several design and implementation loops, as discussed in [3]. The main benefit of AFT with respect to pre-programmed adaptation is that it provides means to define and update dependability mechanisms later during the lifetime of the system. Pre-programmed adaptation implies that all possible undesirable situations are defined at design time, which is difficult to anticipate regarding new threats (attacks), new failure modes (obsolescence of components), or simply adverse situations that have been ignored or forgotten during the safety analysis. In short, fine-grained adaptation of FTMs improves the maintainability of the system from a non-functional viewpoint. 4.2.
Change Model
The choice of an appropriate fault tolerance mechanism (FTM) for a given application depends on the values of several parameters. We consider three classes of parameters: i) fault tolerance requirements (F); ii) application characteristics (A); iii) available resources (R). We denote (F, A, R) the change model. At any point in time, the FTM(s) attached to an application component must be consistent with the current values of (F, A, R). The three classes of parameters make it possible to discriminate between FTMs. Among the fault tolerance requirements F, we focus, for the time being, on the fault model that must be tolerated. Our fault model classification is based on well-known types [2], e.g., crash faults, value faults, development faults. In this work, we focus on hardware faults, but the approach is perfectly reproducible for FTMs that target development faults. The application characteristics A that we identified as having an impact on the choice of an FTM are: application statefulness, state accessibility and determinism. We consider that the FTMs are attached to a black-box application. This means there is no possibility to interfere with its internals, for tackling non-determinism, for instance, in case an FTM only works for deterministic applications. Resources R play an important part and represent the last step in the selection process. FTMs require resources such as bandwidth, CPU, and battery life/energy. In case more than one solution exists given the values of the parameters F and A, the resource criterion can invalidate some of the solutions. A cost function can be associated to each solution, based on R. Any parameter variation during the service life of the system may invalidate the initial FTM, thus requiring a transition towards a new one. Transitions may be triggered by new threats, resource loss or the introduction of a new application version that changes the initial application characteristics.
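As an illustration only (hypothetical catalog and names, not the paper's implementation), the (F, A, R) selection step can be sketched in Python: an FTM is a valid candidate when it covers the required fault model, its application assumptions hold, and its resource needs fit.

```python
# Illustrative sketch of FTM selection from the change model (F, A, R).
# Catalog values loosely follow Table I; all names are hypothetical.

FTM_CATALOG = {
    "PBR": {"faults": {"crash"}, "needs_determinism": False,
            "needs_state_access": True, "cpus": 2},
    "LFR": {"faults": {"crash"}, "needs_determinism": True,
            "needs_state_access": False, "cpus": 2},
    "TR":  {"faults": {"transient"}, "needs_determinism": True,
            "needs_state_access": True, "cpus": 1},
}

def valid_ftms(fault_model, app, resources):
    """Return the FTMs whose (F, A, R) assumptions hold for the current state."""
    result = []
    for name, ftm in FTM_CATALOG.items():
        if not fault_model <= ftm["faults"]:
            continue  # F: every fault to be tolerated must be covered
        if ftm["needs_determinism"] and not app["deterministic"]:
            continue  # A: e.g., LFR only works for deterministic applications
        if ftm["needs_state_access"] and not app["state_accessible"]:
            continue  # A: e.g., PBR needs state capture for checkpoints
        if ftm["cpus"] > resources["cpus"]:
            continue  # R: resources are the last step of the selection
        result.append(name)
    return result

# A non-deterministic application on two processing units: LFR is ruled out by A.
app = {"deterministic": False, "state_accessible": True}
print(valid_ftms({"crash"}, app, {"cpus": 2}))  # ['PBR']
```

When several candidates survive, a cost function based on R (as mentioned above) would pick among them.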
A particularly interesting adaptation trigger is a fault model change. Incomplete or misunderstood initial fault tolerance requirements, environmental threats such as electromagnetic interference, or hardware aging may change the initial model to a more complex one.
4.3. FT Design Patterns and Assumptions
To illustrate our approach, we consider some fault tolerance design patterns and briefly discuss their underlying assumptions and resource needs (a full coverage of this point can be found in [15]). Any change that invalidates an assumption or implies an unacceptable resource change calls for an update of the FTMs. Duplex protocols tolerate crash faults using passive (e.g., Primary-Backup Replication, denoted PBR) or active replication strategies (e.g., Leader-Follower Replication, denoted LFR). In this case, each replica is considered a self-checking component, i.e., the error detection coverage is assumed to be perfect. The fault model includes hardware faults or random operating system faults (no common mode faults). At least 2 independent processing units are necessary to run this FTM. Two design patterns tolerating transient value faults are briefly discussed here. Time Redundancy (TR) tolerates transient physical faults or random run-time support faults using repetition of the computation and voting. This is a way to improve the self-checking nature of a replica, but it introduces a timing overhead. Assertion & Duplex (A&D) tolerates both transient and permanent faults. It is a combination of a duplex strategy with the verification, using assertions, of safety properties that could be violated by a value fault or by a random run-time support error. Such assertions can be user-defined and used to parameterize the FTM. In a certain sense, it is a hybrid mechanism, since its overall behavior is customized by application-dependent assertions. Other mechanisms fall into this category, like Recovery Blocks and N-Version Programming.
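A minimal sketch of the Time Redundancy pattern described above: re-run the computation from the same saved state and compare the two results; a disagreement signals a transient fault and triggers a retry. All function names are hypothetical, and determinism of the computation is assumed.

```python
# Illustrative Time Redundancy (TR) sketch: repetition of the computation
# and comparison of results to mask a transient value fault.

def time_redundancy(compute, save_state, restore_state, retries=2):
    """Run `compute` twice from the same saved state; on disagreement, retry."""
    for _ in range(retries + 1):
        snapshot = save_state()
        first = compute()
        restore_state(snapshot)   # re-run from an identical state
        second = compute()
        if first == second:       # compare: no transient fault observed
            return first
    raise RuntimeError("persistent disagreement: fault is not transient")

# Demo with one injected transient fault: the first call returns a wrong value.
state = {"x": 3}
faults = [1]

def compute():
    result = state["x"] * state["x"]
    if faults and faults.pop():   # the transient fault fires exactly once
        return result + 1
    return result

print(time_redundancy(compute, lambda: dict(state), state.update))  # 9
```

The first pair of runs disagrees (10 vs. 9) because of the injected fault; the retry agrees and the correct value is returned, at the cost of the timing overhead mentioned above.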
Adjudicators and multiple versions are examples of user-defined software blocks used in these generic fault tolerance design patterns. In the work reported in this paper, we use simple implementations of a subset of FTMs (see Table I). More complex implementations have been proposed in other works, as described in [16].
<table>
<thead>
<tr><th colspan="2">Assumptions / FTM</th><th>PBR</th><th>LFR</th><th>TR</th><th>A&amp;D</th></tr>
</thead>
<tbody>
<tr><td rowspan="2">Fault Model (F)</td><td>Crash</td><td>√</td><td>√</td><td></td><td>√</td></tr>
<tr><td>Transient</td><td></td><td></td><td>√</td><td>√</td></tr>
<tr><td rowspan="2">Application Characteristics (A)</td><td>Deterministic</td><td></td><td>√</td><td>√</td><td>(√)</td></tr>
<tr><td>State Access</td><td>√</td><td></td><td>√</td><td>(√)</td></tr>
<tr><td rowspan="2">Resources (R)</td><td>Bandwidth</td><td>high</td><td>low</td><td>nil</td><td>(TBD)</td></tr>
<tr><td># CPU</td><td>2</td><td>2</td><td>1</td><td>2</td></tr>
</tbody>
</table>
Table I. Assumptions and fault tolerance design pattern characteristics
The underlying characteristics of the considered FTMs, in terms of (F, A, R), are shown in Table I. For instance, PBR and LFR tolerate the same fault model, but have different A assumptions and R needs. PBR allows non-determinism of application execution because only the Primary computes client requests, while LFR only works for deterministic applications, as both replicas compute all requests. LFR could tackle non-determinism if the application were not considered a black box, as it is in our approach. PBR requires state access for checkpoints and higher network bandwidth (in general), while LFR does not require state access but generally incurs higher CPU costs (and, consequently, higher energy consumption) as both replicas perform all computations. During the service life of the system, the values of the parameters enumerated in Figure 1 can change. An application can become non-deterministic because a new version is installed.
The fault model can become more complex, e.g., from crash-only it can become crash and value fault due to hardware aging or physical perturbations. Available resources can also vary, e.g., a bandwidth drop or constraints on energy consumption. For instance, the PBR→LFR transition is triggered by a change in application characteristics (e.g., inability to access application state) or in resources (bandwidth drop), while the PBR→A&D transition is triggered by a change in the considered fault model (e.g., safety property verification). Transitions can occur in both directions, according to parameter variation. The priority is the fault model; the selection of the solution (i.e., the composition of several FTMs) then depends on the application characteristics and the available resources. The final objective is always to comply with the dependability properties during the service lifetime.
4.4. Design for adaptation of FTMs
Our design for adaptation aims at producing reusable elementary components that can be combined to implement a given fault tolerance or safety mechanism. Any FTM follows the generic Before-Proceed-After meta-model. Many FTMs can be mapped and combined using this model, as shown in Table II.
<table>
<thead>
<tr><th colspan="2">FTM</th><th>Before</th><th>Proceed</th><th>After</th></tr>
</thead>
<tbody>
<tr><td rowspan="2">PBR</td><td>primary</td><td></td><td>Compute</td><td>Checkpointing</td></tr>
<tr><td>backup</td><td></td><td></td><td>State update</td></tr>
<tr><td rowspan="2">LFR</td><td>leader</td><td>Forward request</td><td>Compute</td><td>Notify</td></tr>
<tr><td>follower</td><td>Handle request</td><td>Compute</td><td>Handle Notification</td></tr>
<tr><td colspan="2">TR</td><td>Save/Restore state</td><td>Compute</td><td>Compare</td></tr>
<tr><td colspan="2">A&amp;D</td><td></td><td>Compute</td><td>Assert</td></tr>
</tbody>
</table>
Table II. Generic execution scheme for FT design patterns
Composition implies nesting the Before-Proceed-After meta-model.
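The nesting of the Before-Proceed-After meta-model can be sketched as follows (a hypothetical Python rendering; in the paper these steps are separate ROS nodes). Composition replaces the Proceed step of an outer FTM with a complete inner FTM.

```python
# Sketch of the Before-Proceed-After meta-model and its nesting.

class FTM:
    def __init__(self, before, proceed, after):
        self.before, self.proceed, self.after = before, proceed, after

    def __call__(self, request):
        self.before(request)           # e.g., LFR leader: forward request
        reply = self.proceed(request)  # compute, or a nested FTM
        return self.after(reply)       # e.g., PBR primary: checkpointing

def server(request):
    return request * 2                 # the actual application service

trace = []
tr = FTM(lambda r: trace.append("TR.before"), server,
         lambda x: (trace.append("TR.after"), x)[1])
# Composition: the outer FTM's Proceed step is the inner FTM itself.
pbr = FTM(lambda r: trace.append("PBR.before"), tr,
          lambda x: (trace.append("PBR.after"), x)[1])

print(pbr(21), trace)  # 42 ['PBR.before', 'TR.before', 'TR.after', 'PBR.after']
```

The trace shows the nesting order: outer Before, inner Before, compute, inner After, outer After.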
This approach improves flexibility, reusability and composability, and reduces development time. Updates are minimized since only a few components have to be changed.
4.5. Run-time support
The software run-time support must provide key features to manipulate the component graph. Any application or FTM is perceived as a graph of components. From previous experiments reported in [13], the following primitives are required:
- Dynamic creation and deletion of components;
- Suspension and activation of components;
- Control over interactions between components, for the creation and the removal of connections (bindings).
Our first implementation was done on a reflective component-based middleware, FRASCATI [17], which features a scripting language to manipulate the component graph, FScript [18]. In the following section, we describe how fault tolerance mechanisms can be implemented in ROS [19] in a way that is transparent to applications. Then, in Section 6, we implement the above-described concepts in ROS.
5. ADDING FAULT-TOLERANCE TO ROS
Put concisely, ROS has not been designed to run safety-critical systems, despite the fact that robots may be safety-critical. Rather, the main goal of ROS is to allow the design of modular applications: a ROS application is a collection of programs, called nodes, interacting only through message passing. Developing an application in ROS involves describing an assembly of nodes, a process that is in line with the component-based architecture we described in the previous section. Such an assembly is referred to as the computation graph of the application.
5.1. Component model and reconfiguration
Two communication models are available in ROS: a publisher/subscriber model and a client/server one. The publisher/subscriber model defines one-way, many-to-many, asynchronous communications through the concept of topics. When a node publishes a message on a topic, it is delivered to every node that has subscribed to this topic.
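The topic-based publish/subscribe model can be illustrated with a minimal, pure-Python sketch (not the actual ROS implementation; the topic name `pxy2pro` merely echoes the example developed later):

```python
# Minimal sketch of one-way, many-to-many delivery via named topics.

from collections import defaultdict

class TopicBus:
    def __init__(self):
        self.subscribers = defaultdict(list)

    def subscribe(self, topic, callback):
        self.subscribers[topic].append(callback)

    def publish(self, topic, message):
        # A publisher is not aware of the list of subscribers to its topic.
        for callback in self.subscribers[topic]:
            callback(message)

bus = TopicBus()
received = []
bus.subscribe("pxy2pro", lambda m: received.append(("protocol", m)))
bus.subscribe("pxy2pro", lambda m: received.append(("logger", m)))
bus.publish("pxy2pro", "request#1")
print(received)  # both subscribers receive the message
```

The decoupling shown here (the publisher holds no reference to its subscribers) is what makes transparent node insertion and substitution possible.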
A publisher is not aware of the list of subscribers to its topics and does not know the other publishers. The client/server model defines bidirectional transactions (one request/one reply) implemented as synchronous communications through the concept of services. A node providing a service is not aware of the client nodes that may use its service. These high-level communication models ease the addition, substitution, or deletion of nodes in a transparent manner, be it offline or online. To provide this level of abstraction, each ROS application has to include a special node called the ROS Master. It provides registration and lookup services to the other nodes. All nodes register their services and topics with the ROS Master. The Master is the sole node that has a comprehensive view of the computation graph. When another node issues a service call, it queries the Master for the address of the node providing the service, and then sends its request to this address. In order to be able to add fault tolerance mechanisms to an existing ROS application in the most transparent manner, we need to implement interceptors. An interceptor provides a means to insert a functionality, such as a monitoring node, in the invocation path between two ROS nodes. To this end, a relevant ROS feature is its remapping capability. At launch time, it is possible to reconfigure the name of any service or topic used by a node. Thus, requests and replies between nodes can be rerouted to interceptor nodes.
5.2. Implementing a componentized FT design pattern
In this section, we first present the generic computation graph we use for FTMs on ROS; then we develop the full ROS implementation of a duplex FT design pattern, Primary-Backup Replication (PBR), combined with a Time Redundancy (TR) design pattern.
5.2.1. Generic Computation Graph
We have identified a generic pattern for the computation graph of an FTM. Figure 2 shows its application in the context of ROS.
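The remapping-based interception described above can be sketched in a few lines of Python (hypothetical names; in ROS the remapping table is set at launch time, not in application code):

```python
# Sketch of interceptor insertion through name remapping.

def resolve(name, remappings):
    """Resolve a service name through a node's remapping table."""
    return remappings.get(name, name)

class Node:
    def __init__(self, services, remappings=None):
        self.services = services          # name -> callable, stands in for lookup
        self.remappings = remappings or {}

    def call(self, service, request):
        # The node keeps using the original name; remapping reroutes the call.
        return self.services[resolve(service, self.remappings)](request)

services = {}
services["srv"] = lambda r: r * 2                    # the real Server
services["clt2pxy"] = lambda r: services["srv"](r)   # interceptor (proxy) in the path

client = Node(services, remappings={"srv": "clt2pxy"})
print(client.call("srv", 21))  # 42, transparently routed through the interceptor
```

The client's code is unchanged: it still calls "srv", but the remapping table reroutes the invocation through the proxy, which is exactly how the FTM graph is inserted below.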
Node Client uses a service provided by Server. The FTM computation graph is inserted between the two thanks to the ROS remapping feature. Since Client and Server must be re-launched for the remapping to take effect, the insertion is done offline.
Figure 2. Generic computation graph for FTM
The FTM nodes, topics, and services are generic for every FTM discussed previously. Implementing an FTM consists in specializing the Before, Proceed, and After nodes with the behavior adequate for the FTM.
5.2.2. Implementing PBR
We illustrate the approach through a Primary-Backup Replication (PBR) mechanism added to a Client/Server application in order to tolerate a crash fault of the Server. Figure 3 presents the associated architecture. Three machines are involved: the CLIENT site, which hosts the Client node and the ROS Master; the MASTER site, hosting the primary replica; and the SLAVE site, hosting the backup replica. For the sake of clarity, the symmetric topics and services between MASTER and SLAVE are not represented. Elements of the SLAVE are suffixed with "S". We present the behavior of each node, and the topics and services used, through a request/reply exchange between a node Client and a node Server (see Figure 3):
- **Client** sends a request to **Proxy** (service `clt2pxy`);
- **Proxy** adds an identifier to the request and transfers it to **Protocol** (topic `pxy2pro`);
- **Protocol** checks whether it is a duplicate request: if so, it directly sends the stored reply to **Proxy** (topic `pro2pxy`).
Otherwise, it sends the request to **Before** (service `pro2bfr`);
- **Before** transfers the request for processing to **Proceed** (topic `bfr2prd`); no action is associated with it in the PBR case, but for other duplex protocols, **Before** may synchronize with the other replicas;
- **Proceed** calls the actual service provided by **Server** (service `prd2srv`) and forwards the result to **After** (topic `prd2aft`);
- **After** gets the last result from **Proceed**, captures the **Server** state by calling the state management service provided by the **Server** (service `aft2srv`), and builds a checkpoint based on this information, which it sends to node **After_S** of the other replica (topic `aft2aft_S`);
- **Protocol** gets the result (topic `aft2pro`) and sends it to **Proxy** (topic `pro2pxy`);
- On the backup replica, **After_S** transfers the last result to its protocol node **Proto_S** (topic `aft2pr_S`) and sets the state of its server to match the primary.
In parallel with request processing, the **crash detector** node on the MASTER (noted CD) periodically gives a proof of life to the **crash detector** (CD_S) on the SLAVE to assert its liveness (topic `CD2CD_S`). If a crash is detected, then the **crash detector** of the slave notifies the **recovery** node (topic `CD_S2rcy`). This node has two purposes: (i) in order to enforce the fail-silent assumption, it must ensure that every node of the MASTER is removed; (ii) it switches the binding between the Client **Proxy** and the MASTER **Protocol** to the SLAVE **Protocol**. Thus, the SLAVE will receive the Client's requests and will act as the Primary, changing its operating mode. ROS does not provide a command to change bindings between nodes after their initialization; the node developer must implement the transition logic. The SLAVE **Protocol** spins, waiting for a notification from **Recovery** (topic `rcy2pro_S`).
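The heartbeat-based detection between CD and CD_S can be sketched as follows (the timeout value is hypothetical; the actual proof-of-life exchange runs over the `CD2CD_S` topic):

```python
# Sketch of the SLAVE-side crash detector: declare a crash when the MASTER's
# periodic proof of life has not arrived within the timeout.

import time

class CrashDetector:
    def __init__(self, timeout):
        self.timeout = timeout
        self.last_heartbeat = time.monotonic()

    def heartbeat(self):
        # Called on each proof of life received from the MASTER's CD node.
        self.last_heartbeat = time.monotonic()

    def crashed(self):
        return time.monotonic() - self.last_heartbeat > self.timeout

cd = CrashDetector(timeout=0.05)
cd.heartbeat()
assert not cd.crashed()
time.sleep(0.1)        # the MASTER stops sending: the timeout elapses
print(cd.crashed())    # True -> notify the Recovery node
```

Note that such timeout-based detection assumes a bound on communication delay; a late heartbeat is indistinguishable from a crash.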
This is done using the ROS API: background threads within a node check for messages independently of the node's main functionality. Upon reception of this topic, the SLAVE **Protocol** subscribes to topic `pxy2pro` and publishes to topic `pro2pxy`. After this transition, the proxy forwards the Client's requests to the SLAVE **Protocol**.

**Figure 3. Computation graph of a PBR mechanism**

5.2.3. **Impact on existing application**

From the designer's viewpoint, two changes are required to integrate an FTM computation graph into an application. First, Client has to be remapped offline to call the proxy node's service instead of calling the **Server** directly. Second, state management services, to get and set the state of the node, must be integrated into the **Server**. From an object-oriented viewpoint, any server inherits from an abstract class `StateManager` providing two virtual methods, `getState` and `setState`, overridden during server development.

5.3. Composition of FT mechanisms

The generic computation graph for FTM is designed for composability. In this section, the composition scenario is two-fold. We first illustrate the composition of two FTMs, PBR for crash faults and TR for transient value faults. Initially the application was installed with PBR. From an operational standpoint, at a given point in time, transient faults impacting numerical calculations appeared due to hardware component aging or a sudden increase of environmental radiation. In a second step, later on, we consider that the communication channel between client and server can be the target of intrusions. Cryptographic protocols, based for instance on a simple Public Key Infrastructure (PKI), can be used to cipher communications and add cryptographic signatures. With respect to request processing, a Protocol node and a Proceed node present the same interfaces: a request as input, a reply as output.
Hence, a way to compose mechanisms is to substitute the Proceed node of a mechanism by a Protocol and its associated Before/Proceed/After nodes, as shown in Figure 4. Our approach enables developing a new mechanism on the foundation of several existing ones. This shortens development time and increases assurance in the overall system, since all mechanisms have been validated off-line by test and fault injection techniques.

![Figure 4. Principle of composition for FT mechanisms](image)

5.3.1. Composition of PBR and TR

The composition of PBR with TR can be triggered by a change in the fault model F. Let us suppose that, at a given point in time during the system lifetime, transient faults need to be tolerated because of hardware aging or because of some changes in the run-time environment, like electromagnetic perturbations. The architecture of the composite FTM made of PBR and TR is given in Figure 5. This figure is an extension of Figure 3 where the Proceed node of the PBR has been replaced with the Protocol node of the TR implementation.

5.3.2. Composing FTMs with Cryptographic protocols

Suppose now that some passive attacks are considered in the fault model F, thus requiring the inclusion of some ciphering mechanisms in addition to the crash and transient fault tolerance mechanisms. The generic computation graph presented in Figure 2 enables cryptographic protocols to be seamlessly added to an application already equipped with accidental fault tolerance mechanisms, PBR and TR in our example. The cryptographic mechanism (called SEC for security) is located at both the client side (SEC_C) and the server side (SEC_S), as shown in Figure 6. On the server side, SEC operates before PBR and TR. In this example, we only deal with possible intrusions between the client and the server. We assume that a node implements the Certification Authority (CA). Three topics are used to communicate with the CA, namely Cli2CA for the Client, Master2CA for the Master and Slave2CA for the Slave.
The topic Cli2CA enables the Before node of the Client to collect the certificate of the Server. Similarly, the topics Master2CA and Slave2CA enable Before of the Master, respectively of the Slave, to collect the certificate of the Client. We assume that all parties know the CA's public key. We also assume that, for each participant, Client or Server, Before and After of the SEC mechanism share the pair of private and public keys they received when initialized. Using the generic scheme given in Figure 6, a message is sent by the client to the server side through a new topic (called Client2Server) connecting Before of SEC_C to Protocol of SEC_S:

- Before of the Client ciphers the request with $K^S_{pub}$, the Server's public key, and adds a signature, using $K^C_{priv}$, the Client's private key;
- Before of the Master deciphers the request with $K^S_{priv}$, the Server's private key, and checks the signature, using $K^C_{pub}$, the Client's public key.

The Server can then proceed with a valid deciphered request through PBR and TR. Conversely, After of the Master ciphers the reply and computes a signature. After of the Client deciphers the reply, checks the signature, and finally delivers the reply to the Client. The communication between Master and Slave can also be secured using a similar security protocol.

6. DYNAMIC ADAPTATION: TO WHAT EXTENT WITH ROS

6.1. FTM Adaptation principles and ROS

Dynamic adaptation requires remote loading and removal of individual elements of a software component architecture, and dynamic binding facilities to reorganize a graph of components. It also requires control features to suspend and activate individual components. To what extent does ROS provide such features to safely adapt an FTM at runtime?
We have considered three types of adaptation: i) updating the current FTM, for instance updating the inter-replica synchronization protocol; ii) switching from one FTM to another because some dramatic change occurred in the fault model, or because an application update leads to new application characteristics; and iii) composing two FTMs, for instance because the fault model has been extended to consider other types of faults.

We recall that the design, development, and validation of a new FTM configuration is performed off-line. The first type of adaptation implies a revision of the design or the implementation of the FTM. The other two are used to comply with the evolution of the parameters (F, A, or R). In all cases, the same features are required. Some are provided by ROS or by the underlying OS, and some have been developed in-house. A minimal set of APIs required to guarantee the consistency of the transition between two different FTMs has been established in previous work [13]:

- Control over component life cycle at runtime (add, remove, start, stop).
- Control over interactions between components at runtime, for creating or removing bindings.

Furthermore, ensuring consistency before, during, and after reconfiguration requires that no requests or replies are lost:

- Components have to be stopped in a quiescent state, i.e. when all internal processing has completed.
- Incoming requests on stopped components must be buffered.

ROS provides means to add and remove nodes, to buffer messages, and to control bindings when a node is launched (using the ROS remapping capability presented in section 5). There is no ROS command to start or stop a node, and ROS does not provide an API to control the bindings of a node at runtime. However, these APIs can be emulated with dedicated logic added to some nodes. For instance, this is what we use to control the bindings in the Primary-Backup Replication to switch to the Backup when the Primary fails.
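The two consistency requirements above (quiescent stop, buffering of incoming requests on stopped components) can be modeled by a component that queues requests while stopped and drains the queue on restart. This is an illustrative sketch, not ROS code; the class and method names are ours:

```cpp
#include <deque>
#include <string>
#include <vector>

// Illustrative component model: while the component is stopped, incoming
// requests are buffered instead of being lost; restarting drains the buffer
// so processing resumes exactly where it left off.
class Component {
public:
    void stop()  { running_ = false; }  // safe: process() is synchronous,
                                        // so no request is in flight
    void start() {                      // restart and drain buffered requests
        running_ = true;
        while (!buffer_.empty()) { process(buffer_.front()); buffer_.pop_front(); }
    }
    void submit(const std::string& req) {
        if (running_) process(req);
        else buffer_.push_back(req);    // no request is lost while stopped
    }
    const std::vector<std::string>& processed() const { return done_; }
private:
    void process(const std::string& req) { done_.push_back(req); }
    bool running_ = true;
    std::deque<std::string> buffer_;
    std::vector<std::string> done_;
};
```

Under this model, a component is safe to reconfigure once it is stopped: pending requests simply wait in the buffer until `start()` is called again.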
To analyse to what extent run-time adaptation is possible with ROS, we need to describe in more detail how topics work. Topics are the central concept in the publish/subscribe communication model used in ROS. A Topic is defined by:

- A name: ports are connected through a named Topic.
- Sending ports: used by publishers to send messages.
- Receiving ports: used by subscribers to receive messages.
- A data type: a unique data type is assigned to a topic for messages.

In ROS, when a node wants to publish or subscribe to a topic, it uses methods provided by the NodeHandle. The NodeHandle is an object instantiated in each ROS node and serves as the main interface to interact with the ROS Master from within a node. The NodeHandle manages the start and the shutdown of the node. It also manages the instantiation of the sending and receiving ports. Creating a publisher or a subscriber is done in the following manner:

- NodeHandle instantiation:

```cpp
ros::NodeHandle nh;
```

- Publisher instantiation:

```cpp
ros::Publisher pub = nh.advertise<Data_type>("topic_name", queue_size);
```

- Subscriber instantiation:

```cpp
ros::Subscriber sub = nh.subscribe("topic_name", queue_size, callback_function);
```

Publishers and Subscribers are ROS objects. The callback function is triggered by the reception of a message and receives the message, of the Topic's data type, as an argument. ROS allows remapping the names of the Topics a node uses, by substituting the Topic names hard-coded in the node with new names provided as parameters of the command launching the node. Therefore, when a new node is launched, we are able to reconfigure the Topics of this node to communicate through any Topic matching the data type of its initial topic. Remapping arguments can be passed to any node and use the syntax `topic_name:=newname`.
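The effect of these remapping arguments can be modeled as a lookup table consulted whenever the node resolves a hard-coded topic name. The following is a simplified, illustrative model of what ROS does at node start-up; the class `Remapper` is ours, not a ROS API:

```cpp
#include <map>
#include <string>

// Simplified model of ROS name remapping: command-line arguments of the
// form "old:=new" are collected into a table, and every hard-coded topic
// name is resolved through that table before ports are created.
class Remapper {
public:
    // Parse one "old:=new" remapping argument (illustrative parser).
    void addArg(const std::string& arg) {
        auto pos = arg.find(":=");
        if (pos != std::string::npos)
            table_[arg.substr(0, pos)] = arg.substr(pos + 2);
    }
    // Resolve a hard-coded name to its remapped name, if any.
    std::string resolve(const std::string& name) const {
        auto it = table_.find(name);
        return it == table_.end() ? name : it->second;
    }
private:
    std::map<std::string, std::string> table_;
};
```

Because the table is built once at launch time, this model also shows why remapping alone cannot change a binding later: after start-up the resolved names are fixed.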
For example, a `Protocol` node which subscribed to a Topic named "pxy2pro" can be remapped at initialization to subscribe to an existing Topic "bfr2prd" by using one of two methods:

- either using an XML script:

```
<node pkg="package name" type="node type" name="node name">
  <remap from="initial topic name" to="final topic name"/>
</node>
```

- or using the command line:

```
rosrun package node initialTopicName:=finalTopicName
```

With these ROS features we can launch nodes and link them into the component graph, so we can adapt or compose FTMs. We illustrate adaptation through a composition example. Our FTM architecture is designed for composability. With respect to request processing, a `Protocol` node and a `Proceed` node present the same interfaces: a request as input, a reply as output. Hence, a way to compose mechanisms is to substitute the `Proceed` node of a mechanism by a `Protocol` and its associated `Before/Proceed/After` nodes, as shown in Fig. 2 and Fig. 6.

Since ROS does not provide services to manipulate a component graph at runtime, we have developed an `Adaptation Engine` node. Its purpose is to run a script controlling the adaptation of an FTM. For instance, the composition of a PBR with a TR mechanism goes through the following steps:

- The Primary `Protocol` is suspended using the Unix signal SIGSTOP;
- The `Proceed` node is killed using a ROS command:

```
rosnode kill Primary/Proceed
```

- The TR nodes (Protocol-B-P-A) are launched (on each replica) using a script in XML and a ROS command: `roslaunch TR TR.launch`;
- The TR `Protocol` links itself to the PBR `Before` topic and the PBR `After` one, using the Topic name parameters provided in the TR.launch script;
- The Primary `Protocol` is restarted using the Unix signal SIGCONT.

Note that ROS ensures that messages are not lost during adaptation. A publisher node buffers all outgoing messages until all its subscriber nodes read them. Thus stopping a node is safe with respect to communication.
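The composition script run by the Adaptation Engine can be modeled as an ordered list of commands. In this sketch the commands are only recorded rather than executed; in the prototype they map to SIGSTOP, `rosnode kill`, `roslaunch` and SIGCONT, and the function name is ours:

```cpp
#include <string>
#include <vector>

// Sketch of the Adaptation Engine script composing TR into PBR.
// The ordering matters: the Primary Protocol must be suspended before the
// Proceed node is replaced, and resumed only once the TR nodes are linked.
std::vector<std::string> composeTrIntoPbr() {
    std::vector<std::string> script;
    script.push_back("kill -STOP Primary/Protocol");   // suspend
    script.push_back("rosnode kill Primary/Proceed");  // substitute: remove
    script.push_back("roslaunch TR TR.launch");        // substitute: launch TR
    script.push_back("link TR/Protocol to bfr2prd and prd2aft");  // link
    script.push_back("kill -CONT Primary/Protocol");   // restart
    return script;
}
```

Because the publisher buffers messages while the Primary `Protocol` is stopped, no request is lost between the first and the last step.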
The other types of adaptation are based on a similar sequence of steps: suspend, substitute, link, and restart. For an update, only one node may be replaced. For a transition between two mechanisms, only the `Before` and `After` nodes need to be changed. With the ROS/Unix features described above, we are able to compose or adapt our FTMs. However, we cannot adapt the communication between two nodes at run-time. The following section describes how we overcome this limitation.

### 6.2. Implementing Dynamic Binding on ROS

Dynamic binding is the ability to configure on-line the communications between two nodes. It is an important feature for AFT: to manipulate the graph of components, we need to manage not only the nodes but also the communications between them. Dynamic binding is therefore crucial to the proposed architecture of FTMs.

A good example of dynamic binding usage is transferring the connection linking the Client to the Primary over to the Backup when the Primary crashes (see section 5.2.2 – Implementing PBR). We cannot kill and relaunch the Backup nodes (this would lose their internal state), therefore we cannot use remapping at initialization. The Topic on which the Client publishes still exists after the crash of the Primary; we need to instantiate the communication ports of the Backup to communicate on this existing Topic. We have added to the node a function to control the instantiation of the communication ports. New data types for topics cannot be defined at run-time; however, new topics based on pre-defined data types can be instantiated. For example, we have implemented a service (simplified in Fig. 7) to activate or deactivate the communication between the Backup and the Client at runtime.
```cpp
bool recover(Request &req, Response &res)
{
    // deactivation of the ports
    if (req.activation == 0) {
        pub.shutdown();
        sub.shutdown();
    }
    // instantiation of the ports to reconnect the node
    else if (req.activation == 1) {
        pub = nh.advertise<Data_type>(req.topic_pub, req.queue_size);
        sub = nh.subscribe(req.topic_sub, req.queue_size, callback);
    }
    return true;
}
```

Figure 7. Example of a dynamic binding service

This service is triggered by an external node, in this example the Recovery node. The input request must contain several parameters, such as the Topic name, publish or subscribe, and activation or deactivation. We chose to use a service (synchronous message) to get an acknowledgment of correct service execution. When the crash of the Primary is detected, the Recovery node calls the service implemented in the Backup, and thus the connection to the Client is dynamically established, without using remapping at initialization.

In Fig. 7 the function `recover` reinitializes the publisher or the subscriber to manage the dynamic binding with an external node. The function has two objectives: i) to shut down a port (using the ROS API `pub.shutdown()` or `sub.shutdown()`) and ii) to initialize the port (using the ROS API `advertise` or `subscribe` presented in 6.1). In any case, an external node is mandatory to trigger the function and to pass to the node the various parameters it needs. In our example, we chose to use an existing Topic to bind the Client and the Backup; with this approach it is also possible to create a totally new Topic between them.

In summary, our dynamic binding approach solves two situations:

1. Activation/shutdown of a Topic in an existing node (switch Primary/Backup)
2. The insertion of a node between two communicating ones (insertion of the FTM)

In our prototype, AFT is realized through a combination of ROS features, Unix features, and some custom services.
In particular, a node's life cycle (stop, start) is controlled directly through UNIX signals. Dynamic binding is achieved through custom methods implemented in the nodes and through external nodes, here the Adaptation Engine or the Recovery node, that orchestrate the adaptation. In conclusion, even if ROS lacks some essential features, AFT is possible with ROS.

7. LESSONS LEARNED SUMMARY

The general requirements for an executive support suitable for implementing AFT, which we exhibited in former work, rely on the following features: i) control over component life cycle at run-time (add, remove, start, stop), and ii) control over interactions at run-time for creating or removing bindings. In addition to separation of concerns, these features relate to the degree of observability and control the software platform provides over the component-based architecture of applications (including FTMs) at run-time. Furthermore, to ensure consistency before, during, and after reconfiguration of the component-based software architecture, several issues must be carefully considered: i) components must be stopped in a quiescent state, i.e., when all internal processing has finished, and ii) incoming requests on stopped components must be buffered. This specification is our frame of reference to discuss the adequacy of ROS as a run-time support for AFT.

In our experiments, a component was mapped to a node at run-time, providing memory space segregation. The binding between components relied on topics managed by the ROS Master. Dynamic binding was possible, but ROS does not provide a specific API to manage such connections between components. As we have seen in the previous section, additional code is required to manage dynamic bindings, using facilities provided by the underlying Linux operating system.
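The signal-based life-cycle control used by the Adaptation Engine can be sketched as follows. This is a POSIX-only illustration in which the "node" is just a child process, and the function names are ours:

```cpp
#include <csignal>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

// Sketch of node life-cycle control through UNIX signals (POSIX only).
// The child process stands in for a ROS node; the engine suspends and
// resumes it with SIGSTOP/SIGCONT, keeping its state intact.
pid_t launchNode() {
    pid_t pid = fork();
    if (pid == 0) {                 // child: stand-in for a ROS node
        for (;;) pause();           // idle loop waiting for signals
    }
    return pid;                     // parent: acts as the Adaptation Engine
}

bool suspendNode(pid_t pid) {       // stop: the process keeps its state
    if (kill(pid, SIGSTOP) != 0) return false;
    int status;
    waitpid(pid, &status, WUNTRACED);  // wait until it is actually stopped
    return WIFSTOPPED(status);
}

bool resumeNode(pid_t pid) {        // restart exactly where it was suspended
    return kill(pid, SIGCONT) == 0;
}
```

Unlike `rosnode kill`, this preserves the node's internal state, which is why the prototype combines ROS commands with plain UNIX signals.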
Control over component life cycle:
- ROS provides commands to delete and create nodes
- Thanks to UNIX commands, nodes can be stopped and restarted

Control over interactions at run-time:
- ROS enables nodes to connect or disconnect to/from topics
- A specific service must be added to all nodes to trigger these connections/disconnections
- The topics a node can connect or disconnect to/from are defined at the initialization of the node
- ROS enables new topics to be created
- Topics store outgoing messages when subscribers are not available

Regarding the features of ROS for implementing AFT, we consider them not entirely satisfactory, as ROS does not provide dynamic binding between nodes, and the API to control component lifecycle at run-time is too weak. However, although imperfect, resilient computing using AFT can be implemented on ROS. Dynamic binding is possible on existing topics by adding some specific services in the nodes; for new topics, a customized solution was proposed in this work. ROS provides separation of concerns, since components can be mapped to nodes (Unix processes) that have their own address space. The communication model used in ROS is also a benefit for designing and implementing resilient distributed applications.

It is worth noting that, as soon as some change is identified, the adaptation of the FTM attached to the application is carried out off-line and thereby validated according to the standard development process in the domain (automotive/ISO26262, aerospace/DO178C, IEC61508, etc.). The dynamic adaptation of the mechanisms is a service to avoid a complete reload of the system.

Using ROS in a dependable and resilient system is hindered by the fact that the ROS Master is a single point of failure in the architecture. The ROS Master must be operational when installing an application and during its execution. When the ROS Master fails, the whole software architecture must be restarted.
We are currently investigating a replicated implementation of the ROS Master using the DMTCP (*Distributed MultiThreaded Checkpointing*) library developed at Northeastern University, Boston [20]. This is however very complex, and having multiple ROS Masters running in parallel is currently not possible. For the time being our software architecture, like any ROS application, is tied to a unique ROS Master. This problem should be solved by the ROS community, and it is something that should be addressed in ROS 2. Indeed, the next major revision of ROS (ROS2) is based on a DDS (*Data Distribution Service*) communication system that should help solve this problem by distributing the ROS Master functionalities among the nodes of the system. This approach would however require reliable multicast protocols, properly implemented and validated.

8. CONCLUSION

The adaptation of embedded applications requires an adequate run-time support. Beyond design-for-adaptation issues, which relate more to the development process, the run-time support must fulfill five requirements: (i) separation of concerns, (ii) componentization, and (iii) component mapping to tasks. The last two criteria relate to the dynamic adaptation of the software on-line: (iv) dynamic binding and (v) control over components. ROS enables the first three requirements to be satisfied, but fails to provide efficient solutions for the last two. On-line adaptation is nevertheless possible, as demonstrated in this paper. We have been able to overcome the limitations of ROS thanks to underlying OS features and some additional logic implemented in the nodes.

As a run-time support for resilient computing, ROS is an interesting development platform to test concepts for Adaptive Fault Tolerance. The mapping of components to ROS is simple (component → node) and on-line modification of FTMs during the lifetime of the system is possible.
The insights gained with this work should help to develop a suitable run-time support for Adaptive Fault Tolerance in the context of safety-critical real-time systems. Our current work is done in collaboration with the Renault-Nissan Group, especially targeting remote updates for ADAS. The basic principles of our approach are consistent with the framework proposed in Adaptive AUTOSAR. The basic operating system is based on POSIX, and several services are defined to master adaptation, like the Software Configuration Management service and the Platform Health Management, which can be related to our Adaptation Engine. We believe that a run-time support like Linux or any POSIX-based OS is not dynamic enough to implement fine-grained adaptation. ROS provides an additional layer to this aim as a middleware, but the granularity remains coarse and the dynamic binding is still difficult to handle. We hope that ROS2 will provide a more powerful and reliable platform, using DDS (Data Distribution Service), for industrial applications. Solving the dynamic binding problem implies revisiting the publish-subscribe implementation to manipulate communication channels at run-time. Finally, the middleware should provide additional features to suspend and activate run-time entities, save/restore internal state, and buffer inter-entity communications. In the near future, we plan to address these issues, taking advantage of the development of the Adaptive AUTOSAR Platform.

REFERENCES
Programming Model Elements for Hybrid Collaborative Adaptive Systems

Ognjen Scekic*, Tommaso Schiavinotto†, Dimitrios I. Diochnos‡, Michael Rovatsos‡, Hong-Linh Truong*, Iacopo Carreras†, Schahram Dustdar*

* Distributed Systems Group, Vienna University of Technology, Austria
Email: oscekic | truong | dustdar @dsg.tuwien.ac.at
† U-Hopper, Trento, Italy
‡ Centre for Intelligent Systems and their Applications, University of Edinburgh, UK
Email: d.diochnos | mrovatso @inf.ed.ac.uk

Abstract—Hybrid Diversity-aware Collective Adaptive Systems (HDA-CAS) is a new generation of socio-technical systems where both humans and machine peers complement each other and operate collectively to achieve their goals. These systems are characterized by the fundamental properties of hybridity and collectiveness, hiding from users the complexities associated with managing the collaboration and coordination of hybrid human/machine teams. In this paper we present the key programming elements of the SmartSociety HDA-CAS platform. We first describe the overall platform's architecture and functionality and then present concrete programming model elements – Collective-based Tasks (CBTs) and Collectives, describe their properties, and show how they meet the hybridity and collectiveness requirements. We also describe the associated Java language constructs, and show how concrete use-cases can be encoded with the introduced constructs.

I. INTRODUCTION

We have recently witnessed the evolution of conventional social computing and the appearance of novel types of socio-technical systems, attempting to leverage human experts for more intellectually challenging tasks [1,2,3,4,5]. These types of systems are opening up the possibilities for novel forms of interaction, collaboration and organization of labor where humans and computers complement each other.
However, even the cited systems limit themselves to using computers to support and orchestrate purely human collaborations, usually based on patterns of work that can be predictably modeled before the execution (Section VI). The innovative approach considered in this paper implies blurring the line between human and machine computing elements, and considering them under a generic term of peers – entities that provide different functionalities under different contexts; participating in collectives – persistent or short-lived teams of peers, representing the principal entity performing the computation (task). Peers and collectives embody the two fundamental properties of the novel approach: hybridity and collectiveness, offered as inherent features of the system. Systems supporting these properties perform tasks and computations transparently to the user by assembling or provisioning appropriate collectives of peers that will perform the task in a collaborative fashion. We call the whole class of these emerging socio-technical systems HDA-CAS[1]. However, building such systems is a challenging task, requiring solutions that go well beyond traditional coordination and communication problems; especially so, when participating humans are not merely considered as computational nodes providing a service at request, but are put on an equal footing and allowed to actively drive computations. In this paper we present the programming model and associated language constructs for the SmartSociety Platform[2], a novel HDA-CAS supporting a wide spectrum of collaboration scenarios. This paper can be considered a follow-up to the complementary paper [6], which presents the functionality of particular platform components, the overall architecture, and the performance evaluation. 
The paper describes how the presented programming model design tackles the fundamental HDA-CAS novelty requirements of hybridity and collectiveness, and showcases how the introduced language constructs can be used to encode and execute hybrid collaborations on the SmartSociety platform. The paper is organized as follows: In Section II we present the necessary background and the intended usage context of the programming model – the SmartSociety platform. In Section III the principal programming model elements are introduced and their functionality described. Section IV presents the associated language constructs for the SmartSociety Platform, and Section V showcases their use on concrete use-cases. In Section VI related approaches are surveyed and contrasted to our approach. Finally, Section VII concludes the paper and points out directions for future activities.

II. BACKGROUND – THE SMARTSOCIETY PLATFORM

The SmartSociety platform (platform) [6], shown in Figure 1, is a software framework intended for use by:

1) Users – external human clients or applications who need a complex collaborative human-machine task performed.
2) Peers – human or machine entities participating in task executions managed by a platform application.
3) Developers – external individuals providing the business logic in the form of programming code that is compiled and executed on the platform as a platform application.

‡ The platform is being developed in the context of the EU FP7 research project 'SmartSociety', URL [http://www.smart-society-project.eu/](http://www.smart-society-project.eu/)

The platform acts as an intermediary between users and peers, providing a collaborative task execution environment and workforce management functionality. The platform is not limited to a particular class of tasks.
Supported task complexity ranges from simple, independent crowdsourcing tasks (e.g., translation); through inter-dependent complex tasks (e.g., collaborative question answering and refinement); through team-based tasks (e.g., predictive maintenance [7]); to fully human-driven collaborations involving non-trivial execution plans with constraint matching and human negotiations (e.g., ride-sharing). However, implementing the desired collaborative effort specification is entirely left to the developers in the context of a particular SmartSociety platform application. The platform facilitates this process by offering a variety of commonly used coordination, orchestration, communication and adaptation mechanisms as ready-made concepts exposed through the programming API.

A. Usage Context & Key Notions

Interested human peers register their profiles with the platform and enlist for performing different professional activities. The platform uses this data for locating and engaging peers in different collaborative efforts. In the case of human peers, the platform asks for an explicit approval, enabling peer engagement under a short-term contractual relationship. In the case of a software peer, the services are contracted under conventional service-level agreements (SLAs). Registered peers are the basis from which appropriate peers are selected into collectives participating in executions of collaborative tasks. A collective is composed of a team of peers along with a collaborative environment assembled for performing a specific task. The collaborative environment consists of a set of software communication and coordination tools. For example, as described in [7], the platform is able to set up a predefined virtual communication infrastructure for the collective members and provide access to a shared data repository (e.g., a Dropbox folder). The complete collective lifecycle is managed by the platform in the context of a SmartSociety platform application (Fig. 1).
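The peer and collective notions above can be sketched with minimal Java types. This is a purely illustrative sketch with hypothetical names and fields, not the platform's actual PeerManager API:

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch; names and fields are illustrative only,
// not the SmartSociety platform's real data model.
class Peer {
    final String id;
    // Human peers are engaged via short-term contracts; software peers via SLAs.
    final boolean human;
    Peer(String id, boolean human) { this.id = id; this.human = human; }
}

// A collective couples a team of peers with a collaborative environment
// (communication and coordination tools, e.g., a shared data repository).
class Collective {
    final List<Peer> members = new ArrayList<>();
    final List<String> environmentTools = new ArrayList<>();
}

public class CollectiveSketch {
    public static void main(String[] args) {
        Collective c = new Collective();
        c.members.add(new Peer("human-expert-1", true));
        c.members.add(new Peer("translation-service", false));
        c.environmentTools.add("shared-repository");
        System.out.println(c.members.size() + " peers, "
                + c.environmentTools.size() + " tool(s)");
    }
}
```

The point of the sketch is only that hybridity lives at the peer level (human or machine behind one abstraction), while the environment is attached to the collective, not to individual peers.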
A platform application consists of different modules, one of which is a SmartSociety program – a compiled module containing the externally provided code that: a) implements the desired business logic of the user; b) manages the communication with the corresponding user applications; and c) relies on libraries implementing the programming model to utilize the full functionality of the platform. Through a corresponding user application, users submit task requests to the platform for execution. The user application communicates with the corresponding platform application.

B. Platform Architecture & Functionality

A simplified, high-level view of the SmartSociety platform architecture is presented in Fig. 1. The rectangular boxes represent the key platform components. The principal component-interoperability channels are denoted with double-headed arrows in the figure. Communication with peers is supported via popular commercial protocols, to allow a broader integration with existing communication software and easier inclusion of peers into the platform. User applications contact the platform through the REST API component. All incoming user requests are served by this module, which verifies their correctness and dispatches them to the appropriate SmartSociety program for processing and response. The program is a Java application making use of the SmartSociety platform's programming model libraries, exposing to the developer the functionality of different platform components. In the remainder of the section, we briefly describe the principal platform components and their functionality, necessary for understanding the subsequently presented design of the programming model. Full details on the platform's architecture and functionality are provided in the complementary paper [6].

**PeerManager (PM):** This is the central peer data-store (peer-store) of the platform.
It manages all peer and application information, and allows privacy-aware access and sharing of the peer/collective data among platform components and applications. More details are provided here[^1].

**Orchestration Manager (OM):** Each platform application features a dedicated OM component [8]. The OM is the component in charge of preparing and orchestrating collaborative activities among peers. Concretely, this includes the following functionalities, reflected in the programming model and the library language constructs (Section III):

- **Discovery** – Provisioning or locating existing human and machine peers appropriate for the given task and forming collectives.
- **Composition** – Generating possible execution plans to meet user-set constraints and optimize wanted parameters.
- **Negotiation** – Coordinating the negotiation process among human peers, leading to the overall agreement and acceptance of the execution plan.
- **Execution** – Monitoring the execution of the selected execution plan during the runtime.

The OM module implements various algorithms for the above-described functionalities. Discovery can either be performed by actively picking members [9], or by coordinating the process of self-formation of the collective as an integral part of the composition and negotiation phases. In the latter case, the OM uses a static decision tree for scheduling the messages of subscription, agreement and withdrawal to the proposed plans originating from human peers [10]. At the moment, during composition the OM generates all possible execution plans that include the participants who satisfy the required constraints. Hence, even though the current approach is not computationally efficient, it suffices as a fully-functional, proof-of-concept implementation. More details on the OM performance are provided in [6].
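The four OM functionalities can be read as successive phases of a single orchestration pipeline. A minimal sketch, assuming hypothetical interface and type names (the actual OM API differs):

```java
import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of the four OM phases as one pipeline interface;
// names are illustrative, not the real Orchestration Manager API.
interface Orchestration<Plan> {
    List<String> discover(String task);      // Discovery: locate/provision suitable peers
    List<Plan> compose(List<String> peers);  // Composition: generate candidate execution plans
    Plan negotiate(List<Plan> candidates);   // Negotiation: drive the agreement on one plan
    boolean execute(Plan agreed);            // Execution: run/monitor the selected plan
}

// Trivial stand-in implementation: one plan per peer; "negotiation"
// simply picks the first candidate plan.
class DummyOrchestration implements Orchestration<String> {
    public List<String> discover(String task) {
        List<String> peers = new ArrayList<>();
        peers.add("peerA");
        peers.add("peerB");
        return peers;
    }
    public List<String> compose(List<String> peers) {
        List<String> plans = new ArrayList<>();
        for (String p : peers) plans.add("plan-by-" + p);
        return plans;
    }
    public String negotiate(List<String> candidates) { return candidates.get(0); }
    public boolean execute(String agreed) { return agreed != null; }
}
```

The design point mirrored here is that each phase consumes the previous phase's output (peers → plans → one agreed plan), which is exactly the shape the CBT state machine in Section III automates.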
**Communication and Virtualization Middleware:** The middleware, named SMARTCOM, is used as the primary means of communication between the platform and the peers, but also among the peers. It supports routing and exchange of messages over different protocols, performing automated message transformations depending on the recipient's type (human/machine) and supported formats. The virtualization functionality of SMARTCOM renders the representation of, and communication with, both human and software-based peers uniform to the remainder of the platform. In addition, it can also be used to provide an ad-hoc communication environment for the members of a particular collective. The developer makes use of SMARTCOM indirectly, through the provided programming API, to communicate with collectives. Internally, the OM also uses SMARTCOM when enacting negotiation protocols.

Fig. 2: Using the SmartSociety programming model.

III. PROGRAMMING MODEL

Figure 2 illustrates the intended usage of the programming model. The developer writes a SmartSociety program performing arbitrary business logic and handling the interaction with user applications. When a task requiring collaborative hybrid processing is needed, the developer uses the programming model constructs to create and concurrently execute a **Collective-based Task (CBT)** – an object encapsulating all the necessary logic for managing complex collective-related operations: team provisioning and assembly, execution plan composition, human participation negotiations, and finally the execution itself. These operations are provided by various SmartSociety platform components, which expose a set of APIs used by the programming model libraries. During the lifetime of a CBT, various **Collectives** related to the CBT are created and exposed to the developer for further (arbitrary) use in the remainder of the code, even outside of the context of the originating CBT or its lifespan.
This allows the developer to communicate directly with the collective members, monitor and incentivize them, but also to use existing collectives to produce new ones, persist them, and pass them as inputs to other CBTs at a later point. In the remainder of the section, we look in more detail into the design and functionality offered by the CBT and Collective constructs.

**A. Collective-Based Tasks (CBT)**

A collective-based task (CBT) is the element of the programming model keeping the state and managing the lifecycle of a collective task. A CBT instance is always associated with a **TaskRequest** containing input data, and possibly a **TaskResult** containing the outcome of the task (cf. Fig. 2). Both are very generic interfaces, meant to hide from the programming model the application-specific format of the input and output data, respectively. In fact, the programming model is designed to be **task-agnostic**. This is in line with the general HDA-CAS principle that unconstrained collaboration should be supported and preferred when possible. This design choice was made to allow subsequent support of different task models, which will be interpretable by the application-specific Orchestration Manager, or by human peers directly. A CBT can be processed purely in one of the two collaboration models (on demand or open call), or in a combination of the two, as specified by the developer upon instantiation. Table I lists the allowed combinations and describes them in more detail (also compare with Fig. 1).

<table>
<thead>
<tr>
<th>Collaboration model flags</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>on_demand = true ∧ open_call = true</td>
<td>A collective of possible peers is first provisioned, then a set of possible execution plans is generated. The peers are then asked to negotiate on them, ultimately accepting one or failing (and possibly re-trying). The set of peers that executes the plan is a subset of the provisioned collective, but established only at runtime.</td>
</tr>
<tr>
<td>on_demand = true ∧ open_call = false</td>
<td>The expectedly optimal collective of peers is provisioned and given the task to execute. The task execution plan is implicitly assumed, or known before runtime; therefore no composition is performed. Negotiation is trivial: accepting or rejecting the task.</td>
</tr>
<tr>
<td>on_demand = false ∧ open_call = true</td>
<td>&quot;Continuous orchestration&quot;. No platform-driven provisioning takes place. The entire orchestration is fully peer-driven (by arbitrarily distributed arrivals of peer/user requests); the platform only manages and coordinates this process. Therefore, neither the composition of the collective nor the execution plan can be known in advance, and both vary in time until either the final (binding) agreement is made, or the orchestration permanently fails due to non-fulfillment of some critical constraint (e.g., timeout). Note that in this case repetition of the process makes no sense, as the process lasts until either success or ultimate canceling/failure.</td>
</tr>
<tr>
<td>on_demand = false ∧ open_call = false</td>
<td>Not allowed/applicable.</td>
</tr>
</tbody>
</table>

TABLE I: CBT collaboration models and selection flags

At the CBT's core is a state machine (Fig. 3) driven by an independent execution thread managing transitions between states representing the eponymous phases of the task's lifecycle: provisioning, composition, negotiation and execution. An additional state, named continuous_orchestration, is used to represent a process combining composition and negotiation under specific conditions, as explained in Table I.
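The flag-guarded happy path of this state machine can be sketched in a few lines of Java. This is a minimal illustrative sketch with hypothetical names; the platform's real state machine additionally has fail states and adaptation-policy transitions (cf. Fig. 3):

```java
// Hypothetical sketch of the happy-path CBT transitions guarded by the
// two collaboration-model flags; fail states and adaptation transitions
// of the real state machine are omitted.
enum CbtState {
    INITIAL, PROVISIONING, COMPOSITION, NEGOTIATION,
    CONTINUOUS_ORCHESTRATION, EXECUTION, DONE
}

class CbtFlow {
    private final boolean onDemand, openCall;

    CbtFlow(boolean onDemand, boolean openCall) {
        if (!onDemand && !openCall)
            throw new IllegalArgumentException("not an allowed flag combination");
        this.onDemand = onDemand;
        this.openCall = openCall;
    }

    CbtState next(CbtState s) {
        switch (s) {
            case INITIAL:
                // Only the open-call-only model starts with continuous orchestration.
                return onDemand ? CbtState.PROVISIONING
                                : CbtState.CONTINUOUS_ORCHESTRATION;
            case PROVISIONING:
                // Without open_call the execution plan is implied, so composition is skipped.
                return openCall ? CbtState.COMPOSITION : CbtState.NEGOTIATION;
            case COMPOSITION:
                return CbtState.NEGOTIATION;
            case NEGOTIATION:
            case CONTINUOUS_ORCHESTRATION:
                return CbtState.EXECUTION;
            default:
                return CbtState.DONE;
        }
    }
}
```

The three allowed flag combinations thus yield three distinct state sequences from the same machine, which is the mechanism by which one CBT abstraction covers all collaboration models.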
The collaboration model selection flags are used in state transition guards to skip certain states. Each state consumes and produces input/output collectives during its execution. All these collectives get exposed to the developer through appropriate language constructs (Listing 2) and are subsequently usable in general program logic. Each state is associated with a set of handlers with predefined APIs that need to be executed in a specific order upon entering the state. The handlers registered for a specific application are assumed to know how to interpret and produce correct formats of input and output data, and to wrap them into TaskRequest and TaskResult objects. By registering different handler instances for the states, the developer can obtain a different overall execution of the CBT. For example, one of the handlers associated with the execution state is the 'QoR' (quality of result) handler. By switching between different handler instances, we can produce different outcomes of the execution phase. Similarly, by registering a different handler, an OM instance with different parameters can be used. This feature is used to implement adaptation policies (Sec. III-B). The programming model provides default dummy handlers. In addition, the aim is to provide the developer with a library of useful, precompiled handlers exploiting the full functionality of the various components of the SmartSociety platform, such as the orchestration and negotiation algorithms provided by the Orchestration Manager, or external provisioning algorithms (e.g., [9]). Concrete handlers are pre-registered for each CBT type exposed to the developer.

**Provisioning state:** The input to the state is the CBT input collective specified at CBT instantiation (most commonly a predefined collective representing all the peers accessible to the application).
In our case, the process of provisioning refers to finding a set of human or machine peers that can support the computation, optimized for, e.g., the highest aggregate set of skills or the lowest aggregate price. See [9] for examples of possible provisioning algorithms. Provisioning is crucial in supporting hybridity in the programming model, because it shifts the responsibility of explicitly specifying peer types or individual peers at design time from the developer onto the provisioning algorithms executed at runtime, thus making both human and machine-based peers eligible depending on the current availability of the peers and the developer-specified constraints. The bootstrapping aspect of provisioning refers to finding and starting a software service, or inviting a human expert to sign up for participation in the upcoming computation; and to setting up the communication topology (e.g., a shared Dropbox folder) and communication policies among them. Details of how this is achieved are provided in [7]. The output of the state is the 'provisioned' collective, which gets passed on to the next state during the execution.

Fig. 3: CBT state diagram.

**Composition state:** Generates possible execution plans meeting the user-set constraints (cf. the OM's composition functionality, Sec. II-B); the output is a list of collectives 'negotiables', associated with composed execution plans, which get passed on to the following state.

**Negotiation state:** Involves selecting one or more execution plans passed as inputs from the composition state and enacting a negotiation process on them. If the state is entered directly from the provisioning state, the execution plan is implied, and assumed to be implicitly understood by participating peers. The negotiation is a complex collaborative process involving human peers, members of the collective associated with the plan, expressing their participation conditions and (potential) participation acceptance.
How exactly a negotiation process unfolds is guided by the negotiation pattern specified by the developer. For example, the pattern may stipulate that at a given time only one plan can be actively negotiated, and that the participation in this plan must be reached through the consensus of all peers belonging to the associated collective. An alternative pattern may allow negotiation of multiple plans in parallel, and termination of the negotiation process as soon as one plan is accepted by a simple majority. The negotiation patterns currently offered by the platform and through the programming model libraries are described here⁵. The output of the negotiation process is the single 'agreed' collective and the associated execution plan.

**Continuous orchestration state:** Continuous orchestration (cf. Table I) does not separate composition and negotiation, but rather allows continuous switching between (re-)composing and negotiating. Each new task request submitted by a user re-triggers composition, allowing the peers to temporarily accept plans and later withdraw, until the plan is ultimately considered accepted and thus becomes ready for execution, or ultimately fails. Note that repetition of this state is not applicable, because repetition is generally done in case of remediable failures, but in this case the orchestration lasts until the execution starts (a non-revocable success) or a non-revocable failure is detected (e.g., a ride to work makes no sense after working hours have already begun). As continuous orchestration is completely human-driven, the developer is expected to provide only the input collective, while the planning and negotiations are handled by the peers. The output is the 'agreed' collective (a subset of the input one) and the associated execution plan. As an example of real-world continuous orchestration, assume a ride-sharing scenario: users submit driving offers, peers submit passenger offers.
An execution plan in this case is the description of the possible route of the ride, along with information on which section is driven by which vehicle/driver and with which passengers. If enough requests are submitted, a number of plans matching hard (time/destination) constraints are generated. However, a number of soft constraints influence the human negotiations: drivers prefer different passengers (due to personal preferences or the price they offer); passengers prefer different routes depending on the vehicles, fellow-passengers, ride cost/duration and the number of transfers. All potential driver/passenger peers are allowed to participate in negotiations for multiple plans in parallel, accepting and withdrawing from multiple plans while they are valid. As soon as all required peers accept it, the plan is considered agreed. However, the plan can exist in the agreed state but still revert to non-agreed if some peer changes his mind before the actual execution takes place. Furthermore, this affects other plans: if a passenger commits to participating in ride A, then ride B may become non-agreed if his presence was a required condition for executing ride B. When the actual plan (ride) finally starts executing, or its scheduled time is reached, the plan is non-revocable; if it is in addition in the agreed state, it can get executed. Otherwise, the orch_fail state is entered. More details are provided here⁶.

**Execution state:** The execution state handles the actual processing of the agreed execution plan by the 'agreed' collective. In line with the general HDA-CAS guidelines, this process is deliberately made highly independent of the developer and the programming model, and is left to be driven autonomously by the collective's member peers. Since peers can be either human or software agents, the execution may be either loosely orchestrated by human peer member(s), or executed as a traditional workflow, depending on what the state's handlers stipulate.
For example, in the simplified collaborative software development scenario shown in Listing 2, both CBTs are executed by purely human-composed collectives. However, the testTask CBT could have been initialized with a different type, implying an execution handler using a software peer to execute a test suite on the software artifact previously produced by the progTask CBT. Whether the developer will choose software or human-driven execution CBTs depends primarily on the nature of the task, but also on the expected execution duration, quality and reliability. In either case, the developer is limited to declaratively specifying the CBT's type (handlers), the required termination criterion and the Quality of Results (QoR) expectations. The state is exited when the termination criterion evaluates to true. The outcome is 'success' or 'failure' based on the value of the QoR metric. In either case, the developer can fetch the TaskResult object, containing the outcome and the evaluation of the acceptability of the task's quality.

**Fail states:** Each of the principal states has a dedicated failure state. Different failure states are introduced so that certain states can be re-entered, depending on what the selected adaptation policy (Sec. III-B) specifies. Some failure states react only to specific adaptation policies; some to none.

**B. Adaptation policies**

An adaptation policy is used to enable re-doing a particular subset of the CBT's general workflow with different functionality and parameters, by changing/re-attaching different/new handlers to the CBT's states, and enabling transitions from the failure states back to active states. The policies are triggered upon entering failure states, as shown in Figure 3. The possible transitions are marked with dotted lines in the state diagram, as certain policies make sense only in certain fail states. Adaptation policies allow for completely changing the way a state is executed.
For example, by registering a new handler for the provisioning state, a different provisioning algorithm can be used. Similarly, a new handler installed by the adaptation policy can, in a repeated negotiation attempt, use the "majority vote" pattern for reaching a decision, instead of the previous "consensus" pattern. Since concrete adaptation policies are meant to extend the functionality of the programming model, they are usually context-specific. Therefore, the programming model limits itself to offering the mechanism of extending the overall functionality through external policies, and itself offers for each failure state only a limited set of simple, generally applicable predefined policies. In order to be general, predefined policies assume re-using existing handlers. Natively supported predefined policies are described in Table II. Only a single adaptation policy is applicable in a single failure state at a given time. If no policy is specified by the developer, the ABORT policy is assumed (shown as a full-line transition in the CBT state machine diagram).

⁵ http://www.smart-society-project.eu/publications/deliverables/D_6_2/

<table>
<thead>
<tr>
<th>Adaptation policy</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>ABORT</td>
<td>Default. Do nothing, and let the fail state lead to total failure.</td>
</tr>
<tr>
<td>REPEAT</td>
<td>Repeats the corresponding active state, with (optionally) new handler(s). If the developer specifies the new handler, we describe the property as 'adaptivity'; if the system automatically determines the new handler, we describe the property as 'elasticity'.</td>
</tr>
<tr>
<td>REPROVISION</td>
<td>Transition into the provisioning state, with (optionally) a new provisioning handler.</td>
</tr>
<tr>
<td>RECOMPOSE</td>
<td>Repeat the composition, with (optionally) a new composition handler.</td>
</tr>
</tbody>
</table>

TABLE II: CBT adaptation policies.

**C. Collectives**

The notion of "collective" in the HDA-CAS community sometimes denotes a stable group or category of peers based on common properties, but not necessarily with any personal/professional relationships (e.g., 'Java developers', 'students', 'Vienna residents'); in other cases, the term refers to a team – a group of people gathered around a concrete task. The former type of collective is more durable, whereas the latter is short-lived. Therefore, we make the following distinction in the programming model:

**Resident Collective (RC):** is an entity defined by a persistent peer-store identifier, existing across multiple application executions, and possibly different applications. Resident collectives can also be created, altered and destroyed fully out of the scope of the code managed by the programming model. The control of who can access and read a resident collective is enforced solely by the peer-store (in our case the PeerManager component). For those resident collectives accessible from the given application, a developer can read/access individual collective members, as well as all accessible attributes defined in the collective's profile. When accessing or creating an RC, the programming model either passes a query to the peer-store and constructs the corresponding object from the returned peers, or passes an ID to get an existing peer-store (PeerManager) collective. In either case, in the background, the programming model passes its credentials to the peer-store. The peer-store then decides, based on the privacy rules, which peers to expose (return). For example, for the requested collective with ID 'ViennaResidents' we may get all Vienna residents who are willing to participate in a new (our) application, but not necessarily all Vienna residents from the peer-store's DB. By default, a newly-created RC remains visible to future runs of the application that created it, but not to other applications.
The peer-store can make them visible to other applications as well. At least one RC must exist in the application, namely the collective representing all peers visible to the application.

**Application-Based Collective (ABC):** Unlike a resident collective, an ABC's lifecycle is managed exclusively by the SmartSociety application. Therefore, an ABC cannot be accessed (i.e., is meaningless) outside of the application's execution context. ABCs are instantiated: a) implicitly – by the programming model libraries, as intermediate products of different states of CBT execution (e.g., 'provisioned', 'agreed'); or b) explicitly – by using dedicated collective manipulation operators to clone a resident collective, or as the result of a set operation over existing Collectives. Also unlike resident collectives, ABCs are atomic and immutable entities for the developer, meaning that individual peers cannot be explicitly known or accessed/modified from an ABC instance. The ABCs embody the principle of collectiveness, making the collective an atomic, first-class citizen in our programming model, and encouraging the developer to express problem solutions in terms of collectives and collective-based tasks, rather than single activities and associated individuals. Furthermore, as collective members and execution plans are not known at design time, this enhances the general transparency and fairness of the virtual working environment, eliminating subjective bias. One of the reasons for introducing the concept of collectives with the described properties is to prevent the User/Developer from using individual human peers as mere computing/processing nodes being assigned activities to perform, instead favoring a more ethical (teamwork) approach. Furthermore, the distinction and existence of both RC and ABC Collective models (Fig.
4) allows a trade-off between hand-picking the team members and the flexibility offered by a platform-managed collective provisioned based on the user's requirements. The rationale in the latter case is similar to cloud computing – the user specifies the infrastructural requirements and constraints, and the platform takes care of provisioning this infrastructure, without the user having to care about which particular VM instances are used or changed. Different use-cases, privacy and fairness policies may dictate or favor the choice of one Collective type over the other. For example, when assembling an input collective of experts for a CBT, the User may require to use as source the RC representing the peers with whom the User had positive previous experiences. Although this seems like a reasonable request, over time the peer community might start exhibiting the characteristics of a scale-free network due to the preferential-attachment method of choosing the collective members [11]. This, in turn, may lead to discouragement of less prominent peers and, overall, increase the attrition rate [12]. To prevent this, the fairness policy of the application/platform enforced at the peer-store may prevent hand-picking of peers, and impose the use of ABCs provisioned transparently to the Developer/User in accordance with the fairness policy (e.g., round-robin or random peer assignment with a reputation threshold). This is important for establishing attractive and competitive virtual crowd marketplaces [13].

IV. LANGUAGE CONSTRUCTS

The functionality of the programming model is exposed through various associated language constructs constituting the SmartSociety Programming API. Due to space constraints, in this section we do not describe the full API, which is rather provided as a separate document [3]. Instead, we describe the supported groups of constructs and their functionality, and some representative individual methods. The examples in Sec.
V-A showcase the use of these constructs. CBT instantiation: This construct allows instantiating CBTs of a given type, specifying the collaboration model and inputs (task request and input collective), as well as configuring or setting the non-default handlers. In order to offer a human-friendly and comprehensible syntax in conditions where many parameters need to be passed at once, we make use of the nested builder pattern to create a "fluent interface" [4], as exemplified in Listing 1.

```java
Cbt cbt = ctx.getCbtBuilder("MyCBTType")
    .of(CollaborationType.OC) //Enum: OC, OD, OC_OD
    .forTaskRequest(t)
    .forInputCollective(c)
    .build();
```

Listing 1: Instantiation of a CBT. CBT lifecycle operations: These constructs allow testing for the state of execution, and controlling how and when CBT state transitions can happen. Apart from getters/setters for individual CBT selection (state) flags, the API provides a convenience method that sets all flags to true/false at once:
- `setAllTransitionsTo(boolean tf)`

Since the initial state can transition into more than one state, the following method is used:
- `void start()` – allows entering the provisioning or continuous_orchestration state (depending on which of them is the first state). Non-blocking call.

Furthermore, CBT implements the Java 7 Future interface and preserves its semantics. This offers a convenient and familiar syntax to the developer, and allows easier integration of CBTs with legacy code. The Future API allows the developer to control and cancel the execution, and to block on a CBT waiting for the result:
- `TaskResult get()` – waits if necessary for the computation to complete (until isDone() == true), and then retrieves its result. Blocking call.
- `TaskResult get(long timeout, TimeUnit unit)` – same as above, but throws an appropriate exception if the timeout expires before the result is obtained.
- `boolean cancel(boolean mayInterruptIfRunning)` – attempts to abort the overall execution in any state and transition directly to the final fail-state. The original Java 7 semantics of the method is preserved.
- `boolean isCancelled()` – returns true if the CBT was canceled before it completed. The original Java 7 semantics of the method is preserved.

Listing 3 (3-5, 7, 16, 21, 28) shows the usage of some of the constructs. CBT collective-fetching operations: As explained in Sec. III-C, during the CBT's lifecycle multiple ABCs get created ('input', 'provisioned', 'negotiables', 'agreed'). These constructs serve as getters for those collectives. At the beginning of the CBT's lifecycle, the return values of these methods are null. During the execution, the executing thread updates them with current values. Listing 2 (20-21) shows examples of these constructs. Collective manipulation constructs: These constructs allow instantiation of RCs by running queries on the peer-store (PeerManager), or by creating local representations of already existing peer-store collectives with a well-known ID. We assume that the peer-store checks whether we are allowed to access the requested collective, and filters out only those peers whose privacy settings allow them to be visible to our application's queries.
- `ResidentCollective.createFromQuery(PeerMgrQuery q, string to_kind)` – Creates a collective by running a query on the PeerManager.
- `ResidentCollective.createFromID(string ID, string to_kind)` – Creates a local representation of an already existing collective on the PeerManager, with a pre-existing ID.

This group also contains methods for explicitly instantiating ABCs. Due to specific properties of ABCs (Sec.
III-C), they can only be created through cloning or set operations from already existing collectives (both RCs and ABCs). These operations are performed in a way that preserves atomicity and immutability. Finally, a method for persisting the collectives at the peer-store is also provided.
- `ABC copy(Collective from, [string to_kind])` – Creates an ABC instance of kind to_kind. Peers from collective from are copied to the returned ABC instance. If to_kind is omitted, the kind of collective from is assumed.
- `ABC join(Collective master, Collective slave, [string to_kind])` – Creates an ABC instance containing the union of peers from collectives master and slave. The resulting collective must be transformable into to_kind. The last argument can be omitted if both master and slave have the same kind.
- `ABC complement(Collective master, Collective slave, [string to_kind])` – Creates an ABC instance containing the peers from collective master after removing the peers present in both master and slave. The resulting collective must be transformable into to_kind. The last argument can be omitted if both master and slave have the same kind.
- `void persist()` – Persists the collective on the peer-store. RCs are already persisted, so in this case the operation defaults to renaming.

Listing 2 (1-2, 19-22) shows examples of these constructs. Collective-level communication: The programming model fully relies on our messaging and virtualization middleware SMARTCOM [7], developed to support communication with peers and collectives. At the moment, the programming model provides only a basic set of communication constructs, namely those for sending a message to a hybrid collective (Listing 3 (1-2-13)) and receiving responses from it.
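To illustrate how the copy/join/complement operators above can preserve atomicity and immutability, the following self-contained Java sketch implements analogous set operations over an opaque, unmodifiable peer set. All class, method and kind names here are hypothetical illustrations of ours, not the actual SmartSociety API:

```java
import java.util.Collections;
import java.util.HashSet;
import java.util.Set;

// Hypothetical stand-in for an ABC: peers are hidden (atomicity) and the
// backing set is never mutated (immutability); operators return new instances.
final class AbcSketch {
    private final Set<String> peers; // never exposed individually
    private final String kind;

    private AbcSketch(Set<String> peers, String kind) {
        this.peers = Collections.unmodifiableSet(new HashSet<>(peers));
        this.kind = kind;
    }

    // Hypothetical factory standing in for CBT-internal ABC creation.
    static AbcSketch fromPeers(Set<String> peers, String kind) {
        return new AbcSketch(peers, kind);
    }

    // copy: clone the peers, optionally re-labelling the kind.
    static AbcSketch copy(AbcSketch from, String toKind) {
        return new AbcSketch(from.peers, toKind != null ? toKind : from.kind);
    }

    // join: union of peers from master and slave.
    static AbcSketch join(AbcSketch master, AbcSketch slave, String toKind) {
        Set<String> union = new HashSet<>(master.peers);
        union.addAll(slave.peers);
        return new AbcSketch(union, toKind != null ? toKind : master.kind);
    }

    // complement: peers of master minus those present in both master and slave.
    static AbcSketch complement(AbcSketch master, AbcSketch slave, String toKind) {
        Set<String> diff = new HashSet<>(master.peers);
        diff.removeAll(slave.peers);
        return new AbcSketch(diff, toKind != null ? toKind : master.kind);
    }

    // Only aggregate information is exposed to the developer.
    int size() { return peers.size(); }
    String kind() { return kind; }
}
```

Note that `size()` and `kind()` expose only aggregate information, mirroring the rule that individual ABC members remain unknown to the application developer.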
Message delivery is in line with individual privacy preferences. V. EVALUATION A programming model can be evaluated both qualitatively and quantitatively. Quantitative analysis is usually performed once the associated domain-specific language (constructs) making use of the programming model is considered mature [14], since this type of evaluation includes measuring productivity and subjective satisfaction in an established community of regular users [15]. During the initial development and prototyping phase, the common approach is to use qualitative evaluation instead [14], which, in general, can include: comparative case studies, analysis of language characteristics, and monitoring/interviewing users. Analysis of language characteristics was chosen as the preferred method in our case. Comparative analysis was not applicable, due to the nonexistence of similarly expressive models, as shown in Section VI. In order to qualitatively evaluate the overall functionality of the programming model, we are currently integrating the programming model libraries into two existing SmartSociety platform applications tested in-field with human peers: a) a ride-sharing application, SmartShare [9]; and b) a hybrid, collective question-answering service, AskSmartSociety [10]. As the two applications focus on the continuous orchestration and on-demand collaboration models, respectively, this exercise is a good indicator of the ability of the model to cover the advertised collaboration models. In addition, in order to qualitatively evaluate the API exposed to the developer, we encoded a set of examples covering important use-cases derived from the set of real-world scenarios specifically elicited for the purposes of the SmartSociety project, and published here [11]. In the remainder of the section, we present an adapted selection of the encoded examples to illustrate the use of the fundamental language constructs. A.
Examples

Manipulating and using collectives: Consider an application that uses the SmartSociety platform to assemble ad-hoc, on-demand programming teams to build software artifacts. For this purpose, two CBT types are assumed to be registered: "MyJavaProgrammingTask" and "MyJavaTestingTask". First, the developer creates an RC javaDev containing all accessible Java developers from the peer-store. This collective is used as the input of the progTask CBT (4-10). progTask is instantiated as an on-demand collective task, meaning that the composition state will be skipped, since the execution plan is implied by the task request myImplementationTReq. The collective is first processed in the provisioning phase, where a subset of programmers with particular skills is selected and a joint code repository is set up for them to use. The output of the provisioning state is the 'provisioned' collective, a CBT-built ABC containing the selected programmers. Since it is atomic and immutable, the exact programmers who are members of the team are not known to the application developer. The negotiation pattern will select the first 50% of the provisioned developers into the 'agreed' collective that will ultimately execute the programming task. After progTask's negotiation, this ABC becomes exposed to the developer, who uses it to construct another collective (19-22), containing the Java developers from the 'provisioned' collective that were not selected into the 'agreed' one. This collective is then used to perform the second CBT, testTask (31-37), which takes as input the output of the first CBT. Listing 2: Manipulating and using collectives. Controlling CBT execution: Listing 3 shows some examples of interacting with the CBT lifecycle. An on-demand CBT named \texttt{cbt} is initially instantiated.
For illustration purposes we make sure that all transition flags are enabled (true by default), then manually set \texttt{do\_negotiate} to false, to force \texttt{cbt} to block before entering the \texttt{negotiation} state, and start the CBT (3-5). While the CBT is executing, arbitrary business logic can be performed in parallel (7-10). At some point, the CBT is ready to start negotiations. At that moment, for the sake of demonstration, we dispatch motivating messages (or possibly other incentive mechanisms) to the human members of the collective (12-14), and let the negotiation process begin. Finally, we block the main thread of the application waiting for the \texttt{cbt} to finish or the specified timeout to elapse (20-21), in which case we explicitly cancel the execution (28).

```java
1 CBT cbt = /*... assume on_demand = true ... */
2 cbt.setAllTransitionsTo(true); //optional
3 cbt.setDoNegotiate(false);
4 cbt.start();
5
6 while (cbt.isRunning() && !cbt.isWaitingForNegotiation()) {
7     //do stuff...
8 }
9
10 for (ABC negotiatingCol : cbt.getNegotiables()) {
11     negotiatingCol.send(
12         new SmartCom.Message("Please accept this task"));
13     // negotiatingCol.applyIncentive("SOME_INCENTIVE_ID");
14 }
15 cbt.setDoNegotiate(true);
16
17 TaskResult result = null;
18 try {
19     //blocks until done, but max 5 hours:
20     result = cbt.get(5, TimeUnit.HOURS);
21     /* ... do something with result ... */
22 } catch (TimeoutException ex) {
23     if (cbt.getCollectiveAgreed() != null) {
24         cbt.getCollectiveAgreed().send(
25             new SmartCom.Message("Thank you anyway, but too late."));
26     }
27 }
28 cbt.cancel(true);
29 //...
```

Listing 3: Controlling CBT's lifecycle.

### VI. RELATED WORK

Here we present an overview of relevant classes of socio-technical systems, their typical representatives, and compare their principal features with the SmartSociety programming model.
Based on the way the workflow is abstracted and encoded, the existing approaches can be categorized into three groups [5]: a) programming-level approaches; b) parallel-computing approaches; and c) process modeling approaches. Programming-level approaches focus on developing a set of libraries and language constructs allowing general-purpose application developers to instantiate and manage tasks to be performed on socio-technical platforms. Unlike SmartSociety, the existing systems do not include the design of the crowd management platform itself, and therefore have to rely on external (commercial) platforms. The functionality of such systems is effectively limited by the design of the underlying platform. Typical examples of such systems are TurKit [16], CrowdDB [17] and AutoMan [4]. TurKit is a Java library layered on top of Amazon's Mechanical Turk, offering an execution model ("crash-and-rerun") which re-offers the same microtasks to the crowd until they are performed satisfactorily. While the deployment of tasks onto the Mechanical Turk platform is automated, synchronization, task splitting and aggregation are left entirely to the programmer. Unlike in SmartSociety, inter-worker synchronization is out of the programmer's reach. The only constraint a programmer can specify is to explicitly prohibit certain workers from participating in the computations. No other high-level language constructs are provided. CrowdDB outsources parts of SQL queries as mTurk microtasks. Concretely, the authors extend traditional SQL with a set of "crowd operators", allowing subjective ordering or comparisons of datasets by crowdsourcing these tasks through conventional micro-task platforms. From the programming model's perspective, this approach is limited to a predefined set of functionalities which are performed in a highly parallelizable and well-known manner.
AutoMan integrates the functionality of crowdsourced multiple-choice question answering into the Scala programming language. The authors focus on automated management of answering quality. The answering follows a hardcoded workflow. Synchronization and aggregation are centrally handled by the AutoMan library. The solution is of limited scope, targeting only the designated labor type. None of the three described systems allows explicit collective formation or hybrid collective composition. Parallel-computing approaches rely on the divide-and-conquer strategy that divides complex tasks into a set of subtasks solvable either by machines or humans. Typical examples include Turkomatic [18] and Jabberwocky. For example, Jabberwocky's \texttt{ManReduce} collaboration module requires users to break down the task into appropriate map and reduce steps, which can then be performed by a machine or by a set of human workers. Hybridity is supported at the overall workflow level, but individual activities are still performed by homogeneous teams. In addition, the efficacy of these systems is restricted to a suitable (e.g., MapReduce-like) class of parallelizable problems. Also, in practice they rely on existing crowdsourcing platforms and do not manage the workforce independently, thereby inheriting all underlying platform limitations. The process modeling approaches focus on integrating human-provided services into workflow systems, allowing modeling and enactment of workflows comprising both machine and human-based activities. They are usually designed as extensions to existing workflow systems, and therefore can perform certain peer management. The currently most advanced systems are CrowdLang [3], CrowdSearcher [4] and CrowdComputer [5]. CrowdLang brings in a number of novelties in comparison with the previously described systems, primarily with respect to collaboration synthesis and synchronization.
It enables users to (visually) specify a hybrid machine-human workflow by combining a number of generic (simple) collaborative patterns (e.g., iterative, contest, collection, divide-and-conquer), and to generate a number of similar workflows by differently recombining the constituent patterns, in order to arrive at a more efficient workflow at runtime. The use of human workflows also enables indirect encoding of inter-task dependencies. The user can influence which workers will be chosen for performing a task by specifying, for each subtask, a predicate that needs to be fulfilled. The predicates are also used for specifying a limited number of constraints based on social relationships, e.g., to consider only Facebook friends. CrowdSearcher presents a novel task model, composed of a number of elementary crowdsourcable operations (e.g., label, like, sort, classify, group) associated with individual human workers. Such tasks are composable into arbitrary workflows, through application of a set of provided common collaborative patterns. This allows a very expressive model, but on a very narrow set of crowdsourcing-specific scenarios. This is in full contrast with the more general, task-agnostic approach taken by the SmartSociety programming model. The provisioning is limited to the simple mapping "1 microtask ↔ 1 peer". No notion of collective or team is explicitly supported, nor is human-driven orchestration/negotiation. Finally, CrowdComputer is a platform allowing users to submit general tasks to be performed by a hybrid crowd of both web services and human peers. The tasks are executed following a workflow encoded in a BPMN-like notation called BPMN4Crowd, and enacted by the platform. While CrowdComputer assumes splitting of tasks and assignment of single tasks to individual workers through different 'tactics' (e.g., marketplace, auction, mailing list), SmartSociety natively supports actively assembling hybrid collectives to match a task.
In addition, by providing a programming abstraction, SmartSociety offers a more versatile way of encoding workflows. VII. CONCLUSIONS & FUTURE WORK In this paper we presented the programming model of SmartSociety, an HDA-CAS platform supporting collaborative computations performed by hybrid collectives composed of software and human-based services. The platform is able to host user-provided applications, managing collaborative computations on their behalf. Even though related systems allow a certain level of runtime workflow adaptability, they are limited to patterns that need to be foreseen at design-time (Sec. VI). SmartSociety differs from these systems by extending the support for collaborations, spanning from processes known at design-time to fully human-driven, ad-hoc runtime workflows. The spectrum of supported collaboration models and the runtime workflow adaptability are exposed through the newly introduced "CBT" and "Collective" constructs. The two constructs represent the principal contribution of this paper. The CBT is task-agnostic, delegating to the developer the responsibility of providing a mutually-interpretable task description, which allows the construct to be generally applicable to the entire class of work activities supported by the platform. Under the hood of the CBT, the programming model offers advanced composition of execution plans, coordination of the negotiation process and virtualization of peers. The Collective construct, coming in two flavors (RCs and ABCs), highlights the collective aspect of the task execution and prevents assigning individuals to workflow activities. At the same time, it allows the platform to enforce desired privacy and fairness policies, and prevents exploiting human peers as individual processing nodes.
Using the associated API, developers can make use of the two constructs and leave it to the platform's runtime to provision the collectives, orchestrate the negotiation and agreement between human peers, and ultimately perform the task collaboratively. At the moment, a number of simple adaptation strategies are also supported. All these phases are handled transparently to the developer. The API was designed to be comprehensive and familiar, and to integrate well with legacy (Java) code. Currently, the programming model has been qualitatively validated. Future work will see the full implementation and validation of the programming model in real-world experiments, once the full integration of all project-developed components has been performed. Talks are currently under way to run these tests in municipalities of Northern Italy and Israel. ACKNOWLEDGMENT Supported by EU FP7 SmartSociety project, grant 600854. REFERENCES
Reducing Interpolant Circuit Size by Ad Hoc Logic Synthesis and SAT-Based Weakening

Cabodi, Gianpiero; Camurati, Paolo Enrico; Palena, Marco; Pasini, Paolo; Vendraminetto, Danilo. In: Formal Methods in Computer-Aided Design (FMCAD 2016), Mountain View, California, USA, October 3-6, 2016, pp. 25-32. Publisher: IEEE. DOI: 10.1109/FMCAD.2016.7886657. ©2016 IEEE, author's accepted manuscript.

Reducing Interpolant Circuit Size by Ad-Hoc Logic Synthesis and SAT-Based Weakening G. Cabodi, P. E. Camurati, M. Palena, P. Pasini, D. Vendraminetto Dipartimento di Automatica ed Informatica Politecnico di Torino - Turin, Italy Email: {gianpiero.cabodi, paolo.camurati, marco.palena, paolo.pasini, danilo.vendraminetto}@polito.it Abstract—We address the problem of reducing the size of Craig interpolants used in SAT-based Model Checking. Craig interpolants are AND-OR circuits, generated by post-processing refutation proofs of SAT solvers. Whereas it is well known that interpolants are highly redundant, their compaction is typically tackled by reducing the proof graph and/or by exploiting standard logic synthesis techniques.
Furthermore, strengthening and weakening have been studied as options to control interpolant quality. In this paper we propose two interpolant compaction techniques: (1) A set of ad-hoc logic synthesis functions that, revisiting known logic synthesis approaches, specifically address speed and scalability. Though general and not restricted to interpolants, these techniques target the main sources of redundancy in interpolant circuits. (2) An interpolant weakening technique, where the UNSAT core extracted from an additional SAT query is used to obtain a gate-level abstraction of the interpolant. The abstraction introduces fresh new variables at gate cuts that must be quantified out in order to obtain a valid interpolant. We show how to efficiently quantify them out by working on an NNF representation of the circuit. The paper includes an experimental evaluation showing the benefits of the proposed techniques on a set of benchmark interpolants arising from both hardware and software model checking problems. I. INTRODUCTION Craig interpolants (ITPs) [1], introduced by McMillan [2] in the Unbounded Model Checking (UMC) field, have been shown to be effective on difficult verification instances. From a Hardware Model Checking perspective, Craig interpolation is an operator able to compute over-approximated images. The approach can be viewed as an iterative refinement of proof-based abstractions, narrowing a proof down to relevant facts. Over-approximations of the reachable states are computed from refutation proofs of unsatisfiable Bounded Model Checking-like runs, in terms of AND-OR circuits, generated in linear time and space w.r.t. the proof. From the perspective of Software Model Checking, instead, interpolants are used to strengthen the results of predicate abstraction [3]. In case the inductive invariant representing a program is insufficient to prove a given property, interpolants can be used as predicates to refine such an abstraction [4].
The most interesting features of Craig interpolants are their completeness and the fact that they can be used as an automated abstraction mechanism, whereas one of their major drawbacks is the inherent redundancy of interpolant circuits, which calls for fast and scalable techniques to compact them. Improvements over the base method [2] were proposed in [5], [6], [7], [8] and [9], in order to push forward the applicability and scalability of the technique. Craig interpolants can be computed as AND-OR circuits, generated by post-processing refutation proofs of SAT solvers. Modern SAT solvers are capable of generating a resolution proof from unsatisfiable runs without incurring large additional cost [10]. Due to the nature of the algorithms employed by SAT solvers, a resolution proof may contain redundant parts, and a strictly smaller resolution proof can often be obtained. Although a Craig interpolant is linear in the proof size, the proof itself may be large and highly redundant. SAT solvers are not usually targeted at producing proofs of minimal size, and therefore they may be deemed ultimately responsible for Craig interpolant size and redundancy. This is the main reason why most efforts on interpolant size reduction have been addressed as SAT solver improvements and/or proof reduction. A. Contributions In this paper we propose a fast and scalable logic synthesis approach, as well as a novel interpolant weakening (and strengthening) technique that also addresses circuit compaction. The main contributions are thus two interpolant compaction techniques: - A set of ad-hoc logic synthesis functions specifically addressing speed and scalability. Though general and not limited to interpolants, they target the main sources of redundancy in interpolant circuits; - An interpolant weakening technique, where an additional SAT query is performed in order to obtain a gate-level abstraction of the interpolant.
Although fresh new variables are introduced at gate cuts, clearly outside the set of shared symbols, we show how to quantify them out by working on an NNF encoding of the circuit. B. Related works Interpolant compaction has been addressed in [11] and [12]. With respect to [11], we present additional techniques addressing scalability and interpolant compaction by weakening/strengthening. Interpolant weakening/strengthening is the subject of many papers, with little relation to our work. Among them, we consider [13] for an interesting discussion on the relationship between interpolant strength and quality. The notion of dominance between nodes of a directed graph is central in this work. Dominators have been used in the context of logic synthesis before, e.g., in [14], [15]. C. Outline Section II introduces background notions and notation about Boolean circuits, Craig interpolants, gate-level abstraction and circuit compaction techniques. Section III describes the proposed ad-hoc logic synthesis functions, whereas our interpolant weakening technique is illustrated in Section IV. Section V presents and discusses the experiments we performed. Finally, Section VI concludes with some summarizing remarks. II. BACKGROUND A. Combinational Boolean Circuits Definition 1. A Boolean circuit (or network) is a directed acyclic graph \( G = (V, E) \), where a node \( v \in V \) represents either a logic gate, a primary input (PI) or a primary output (PO) of the circuit, and each directed edge \( (u, v) \in E \) represents a signal in the circuit connecting the output of node \( u \) to an input of node \( v \). The fanin (fanout) of a node is the set of incoming (outgoing) edges of that node. Primary inputs are nodes with no fanin, whereas primary outputs are nodes with no fanout. Every logic gate \( v \in V \) is associated with a Boolean function \( f_v : \mathbb{B}^n \rightarrow \mathbb{B} \), where \( n \) is its number of inputs.
The fanin (fanout) sets are typically represented by lists. With abuse of notation, we use the terms fanin and fanout to identify both edges and the related sets of adjacent nodes. Given a gate node \( v \), \( \text{type}(v) \) is used to indicate the type of logic function associated with \( v \) (AND, OR, NOT, etc.). Definition 2. Given a circuit \( G = (V, E) \), a node \( u \) dominates a node \( v \) iff every path from \( v \) to any of the primary outputs of \( G \) contains \( u \). A node \( u \) that dominates a node \( v \) is called a dominator of \( v \). Definition 3. Given a circuit \( G = (V, E) \) and a node \( r \), a cone \( C = (V_C, E_C) \) rooted in \( r \) is a sub-graph of \( G \) consisting of \( r \) and some of its non-primary-input predecessors, such that any node in \( C \) has a path to \( r \) that lies entirely in \( C \). The fanin (fanout) of a cone is the set of nodes \( u \) not in \( C \) that are inputs (outputs) of a node in \( C \). Node \( r \) is called the root of the cone \( C \), denoted by \( \text{root}(C) \); non-root nodes of the cone are called internal nodes, whereas nodes in the fanin of the cone are called cut nodes of \( C \), denoted by \( \text{cut}(C) \). Nodes of \( C \) that have at least one cut node \( v \) in their fanin are called entry points in \( C \) for \( v \). The Boolean function \( f_r \) associated with the cone root is called the cone function. With abuse of notation we sometimes use \( v \in C \) to mean \( v \in V_C \). Definition 4. A cluster is a cone \( C \) rooted in \( r \) such that, for each node \( v \) in \( C \), \( v \) has unit fanout and is dominated by \( r \) in \( G \). Note that the cut nodes of a cluster \( C \) are either PIs or fanout branches, and the root \( r \) of \( C \) is either a PO or a fanout stem. Note also that the sub-graph of the circuit that defines a cluster \( C \) is a tree.
Given a node \( v \in C \), every successor \( u \) of \( v \) in \( C \) is a dominator of \( v \) in \( G \). Definition 5. A macrogate is a cluster \( M \) such that every node \( v \) in \( M \) represents the same associative Boolean function. An OR-macrogate (AND-macrogate) is a macrogate composed of logical disjunction (conjunction) nodes. The definitions provided for cones are naturally extended to clusters and macrogates. An example of clusters and macrogates appears in Figure 1, where one cluster includes one OR- and two AND-macrogates. Fig. 1: A subcircuit partitioned in clusters (enclosed by a blue dashed line) and macrogates (enclosed by a dotted red line). Definition 6. Given a cone \( C \) rooted in \( r \) and a variable \( a \in \text{cut}(C) \), variable \( a \) is not observable on \( f_r \) iff \( f_r(X, \perp) \equiv f_r(X, \top) \), with \( X = \text{cut}(C) \setminus \{a\} \). A literal is either a Boolean variable or its negation. A clause is a disjunction of literals. A Boolean formula \( F \) is in Conjunctive Normal Form (CNF) if it is a conjunction of clauses. Given a Boolean formula \( F \), we denote with \( \text{supp}(F) \) the set of Boolean variables over which \( F \) is defined. A Boolean formula \( F \) is in Negation Normal Form (NNF) if the negation operator (¬) is only applied to its variables, and the only other operators allowed are conjunction (∧) and disjunction (∨). Any formula can be transformed to NNF in linear time through direct application of De Morgan’s laws and the elimination of double negations. In the worst case, the size of the circuit implementing a formula \( F \) might double when \( F \) is transformed into NNF. B. Craig Interpolants Let \( A \) and \( B \) be two inconsistent Boolean formulas, i.e., such that \( A \land B \equiv \bot \). 
A Craig interpolant \( I \) for \( (A, B) \) is a formula such that: (1) \( A \Rightarrow I \), (2) \( I \land B \equiv \bot \), and (3) \( \text{supp}(I) \subseteq \text{supp}(A) \cap \text{supp}(B) \). We use ITP to denote the interpolation operation. An interpolant \( I = \text{ITP}(A, B) \) can be derived, as an AND-OR circuit, from the refutation proof of \( A \land B \). Most modern SAT solvers are capable of producing resolution proofs. A resolution proof provides evidence of unsatisfiability for a CNF formula $F$ as a series of applications of the binary resolution inference rule. Given two clauses $C_1 = (l \lor l_1 \lor ... \lor l_n)$ and $C_2 = (\neg l \lor l'_1 \lor ... \lor l'_m)$, a resolvent $C$ is computed using the resolution operator, defined as: $C = \text{Res}(C_1, C_2) = (l_1 \lor ... \lor l_n \lor l'_1 \lor ... \lor l'_m)$. Starting from the clauses of $F$, this rule is applied until the empty clause is derived. Craig interpolants are generated from resolution proofs as described in [2]. The resulting ITP circuit is isomorphic to the proof: original clauses are translated into either OR gates or constants, and resolution steps are translated into either AND or OR gates. The specific interpolant obtained, within the range between $A$ and $\neg B$, depends on SAT solver decisions; its resulting strength/weakness is thus not under user control. This motivated research on ex-post interpolant strengthening/weakening.

C. Combinational Circuit Compaction

This subsection briefly overviews, without any claim of completeness or generality, some combinational synthesis techniques our circuit compaction approach is based upon. Redundancies affecting non-canonical combinational circuits are removed by structural hashing, cut-based [16], BDD-based [17] and SAT-based [18] sweeping. These methods basically rely on finding and merging classes of functionally equivalent circuit nodes. Other reduction efforts exploit various decomposition, rewriting and balancing strategies.
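Returning briefly to the resolution rule stated above: it can be sketched in a few lines, representing clauses as sets of signed integer literals (\(-v\) encoding the negation of variable \(v\)) — a common but here purely illustrative encoding:

```python
# The binary resolution rule Res(C1, C2) from the text (illustrative sketch).
# Clauses are sets of signed integer literals: -v is the negation of v.

def resolve(c1, c2, pivot):
    """Resolvent of C1 and C2: C1 must contain `pivot`, C2 its negation."""
    assert pivot in c1 and -pivot in c2
    return (c1 - {pivot}) | (c2 - {-pivot})

# Refuting F = (x) AND (NOT x OR y) AND (NOT y), with x = 1 and y = 2:
c1, c2, c3 = {1}, {-1, 2}, {-2}
r1 = resolve(c1, c2, 1)      # resolvent: the unit clause (y)
empty = resolve(r1, c3, 2)   # the empty clause: F is unsatisfiable
```

Deriving the empty clause from the clauses of \(F\), as here, is exactly the evidence of unsatisfiability a resolution proof provides.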
A mix of locally canonical transformations and DAG-aware rewritings on technology-independent circuits was first proposed in [19]. [14] introduces a technique for preprocessing combinational logic before technology mapping. We follow [14] in its use of And-Inverter Graphs (AIGs), composed of two-input ANDs and inverters. Scalability is achieved by making all operations local, and moving to a global scope by iterated application of local reductions. As a result, the cumulative effect of several rewriting steps is often superior to traditional synthesis in terms of quality.

Redundancy removal under Observability Don't Cares (ODCs) is a powerful variant of redundancy removal, where node equivalences are established taking into account their observability at circuit outputs. All ODC-based approaches rely on a computation of don't care conditions for the nodes involved in redundancy checks. As exact computation is prohibitively expensive, approximate techniques have been proposed. BDD-based Compatible Observability Don't Care (CODC) sets were computed in SIS [21]. Approximated ODCs (by "windowing") were introduced in [22], where scalability was achieved by restricting the sub-circuit environment to a local window. SAT-based quantifier elimination [23], augmented with random sampling, is a further attempt to exploit the power of SAT solvers.

D. Gate-Level Abstraction

Abstraction techniques are a well-known area of research in Model Checking. Our paper is related to a form of localization abstraction [24] called Gate-Level Abstraction [25]. Abstraction by localization is based on removing circuit components (i.e., cutting wires) not necessary for a proof.
Detection of unnecessary parts has been proposed following two main schemes:
- Counterexample-Based Abstraction refinement (CBA) [26], where an initially weak abstraction is iteratively refined (strengthened) based on spurious counterexample analysis;
- Proof-Based Abstraction (PBA), which exploits the ability of modern SAT solvers to generate proofs of unsatisfiability; a more recent variant, investigated in standalone mode or combined with CBA, as in [27].

In most model checkers, localization is done at register boundaries. Gate-Level Abstraction [25] is a particular abstraction scheme (compatible in principle with both CBA and PBA strategies), where localization is done at gate nodes.

III. INTERPOLANT COMPACTION BY AD-HOC LOGIC SYNTHESIS

In this section we present a set of procedures to reduce the size of Boolean circuits, based on local simplification techniques arising from logic synthesis. Although applicable to any Boolean circuit, our approach specifically targets the main sources of redundancy of interpolant circuits: gates that can be replaced by a constant value, or sub-circuits that can be merged, being functionally equivalent (though topologically distinct). We consider an interpolant as a single-output circuit $G$. Starting from an AIG representation of the circuit, we:
- Identify AND and OR gates;
- Partition $G$ into a set of maximal clusters;
- Group trees of AND (resp. OR) gates into macrogates.

Our target is to address gate redundancies by fast operations, where circuit transformations are performed within clusters. The reason for limiting our scope to clusters is that fanout stems propagate shared subformulas through different paths within the circuit graph. Simplifications affecting multiple fanout paths are both complex and of limited impact. The circuit $G$ is partitioned into a maximal set of clusters, each of which is in turn partitioned into a set of macrogates.
This is done by means of a depth-first visit of $G$ starting from its root node $r$. Each node $v$ is associated with two pieces of information: its cluster dominator, $\text{domC}(v)$, and its macrogate dominator, $\text{domG}(v)$. As long as the visited nodes have unit fanout, cluster dominator information is propagated. As long as the visited nodes have unit fanout and are of the same type, macrogate dominator information is propagated. Performing such an operation requires $O(|E|)$ time. We thus propose a procedure based on two kinds of local simplifications:
- Redundancy removal (gates equivalent to a constant) based on ODC-like implications within clusters.
- Enforcement of sub-formula sharing (merging of equivalent gates) through macrogate refactoring.

A. ODC Implications Removal

The first simplification technique we propose aims at finding local ODC implications that can be exploited to replace a gate with a constant. Such a technique relies on the following two identities:
\[ f(X, a) = a \land g(X, a) \equiv a \land g(X, \top) \]
\[ f(X, a) = a \lor g(X, a) \equiv a \lor g(X, \bot) \]
Let us consider a Boolean function \( f(X, a) \) expressed as the conjunction (resp. disjunction) of a variable \( a \) and a function \( g \) of \( a \). Then \( a \) can be replaced by the \( \top \) (resp. \( \bot \)) constant in \( g \). Note that the instance of variable \( a \) in the support of \( g \) is not observable on \( f \). From a circuit graph perspective, given \( G \) implementing \( f \), \( a \) is an input variable and \( g \) is a subcircuit of \( G \) with \( a \) in its fanin. There are at least two re-convergent paths from node \( a \) to the output node of \( f \). We call such cases ODC implications for \( f \), as the implications \( f \rightarrow a \) and \( \neg a \rightarrow \neg f \) (resp. \( \neg f \rightarrow \neg a \) and \( a \rightarrow f \)) dually hold in the two respective cases.
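The two identities hold for any choice of \( g \); the following sketch confirms them exhaustively for one example function (chosen by us purely for illustration):

```python
# Exhaustive check of the two ODC-implication identities from the text:
#   a AND g(X, a)  ==  a AND g(X, True)
#   a OR  g(X, a)  ==  a OR  g(X, False)
# The function g below is an arbitrary illustrative example; the identities
# hold for any g, since a fixes the value substituted into it.
from itertools import product

def g(x, a):
    return x or a          # any Boolean function of (x, a) would do

def check_identities():
    for x, a in product([False, True], repeat=2):
        assert (a and g(x, a)) == (a and g(x, True))
        assert (a or g(x, a)) == (a or g(x, False))
    return True
```

The check mirrors the observation in the text: once \( f \) forces the value of \( a \), the copy of \( a \) inside \( g \) is no longer observable at the output.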
We exploit the notion of ODC implications to perform local simplification of functions in the Boolean circuit. This is done by detecting cones \( C \) in the circuit whose function can be expressed as either \( a \land g(X, a) \) or \( a \lor g(X, a) \). In these cases, \( C \) can be simplified by disconnecting the redundant edge from \( a \) to its entry point in \( C \) and injecting a constant. Detection of ODC implications is restricted to macrogate and/or cluster boundaries in order to avoid problems arising from shared elements. We consider both direct ODC implications and transitive ODC implications. Direct ODC implications arise when an input of a function \( f \) is directly implied by \( f \). Figure 2 exemplifies a direct ODC implication. Input \( b \) is a direct ODC implication for \( f_1 \) since \( f_1(a, b, c) = b \land g(a, b, c) \) with \( g(a, b, c) = c \lor (a \lor b) \), and therefore \( f_1 \rightarrow b \). Transitive ODC implications occur when an input of \( f \) is transitively implied by \( f \) through another of its inputs. Figure 3 provides an example of a transitive ODC implication. Input \( b \) is a transitive ODC implication for \( f_1 \): in fact, \( d \) is a direct ODC implication for \( f_1 \) and \( b \) is a direct ODC implication for \( f_d \); therefore, \( f_1 \rightarrow d \rightarrow b \).

Fig. 3: Example of transitive ODC implication.

The DirectOdcSimplify procedure (Algorithm 1) tries to identify cluster inputs that are made redundant by direct ODC implications. Given a cluster \( C \) rooted in \( r \) and one of its inputs \( v \), the algorithm tries to find a node \( d \) in \( C \) such that \( v \) is a direct ODC implication for \( f_d \).
Considering the cluster as a tree of macrogates, this corresponds to finding a common successor \( d \) for two of the entry points of \( v \) in \( C \), called \( u \) and \( t \), such that \( d \) is a direct successor of either \( u \) or \( t \). Since we are considering a tree of macrogates, \( d \) being a direct successor of \( t \) means that \( t \) is connected to \( d \) through a chain of only AND or only OR gates. For each cluster \( C_i \), the algorithm scans each of its cut nodes. For each \( v \in \text{cut}(C_i) \), every pair \( u, t \) of distinct entry points of \( v \) in \( C_i \) is considered. In order to find a common successor for \( u \) and \( t \), first each macrogate dominator of \( u \) is marked by the procedure MarkDominators. Then, the algorithm checks if the macrogate dominator of \( t \) is marked. If that is the case, letting \( d = \text{domG}(t) \), we have either \( f_d(X, v) = v \land g(X, v) \) or \( f_d(X, v) = v \lor g(X, v) \) for some \( g \). Therefore, \( v \) in \( g \) is not observable on \( f_d \) and the circuit can be simplified by calling function Simplify. Such a function takes a pair of nodes and a gate type as arguments, removes the edge \((v, u)\) from the circuit and injects an appropriate constant value at the newly created free input. The injected constant is \( \top \) if the gate type passed as argument is AND, \( \bot \) if it is OR. After injecting the constant, the circuit is simplified accordingly. Otherwise, if \( \text{domG}(t) \) is not marked, the algorithm proceeds with the next pair of entry points. Time complexity of DirectOdcSimplify is \( O(|V| \max \{|\text{cut}(C_i)|\}) \).

Algorithm 1. DirectOdcSimplify(\( \mathcal{G} \))
1: for all clusters \( C_i \in \mathcal{G} \) do
2:   for all nodes \( v \) in \( \text{cut}(C_i) \) do
3:     for all pairs \( (u, t) \) in \( \text{fanout}(v) \cap C_i \) with \( u \neq t \) do
4:       MarkDominators(\( u \))
5:       if \( \text{domG}(t) \) is marked then
6:         Simplify(\( v, u, \text{type}(t) \))
7:       UnmarkDominators(\( u \))

Fig. 2: Example of direct ODC implication.

The TransitiveOdcSimplify procedure (Algorithm 2) tries to identify cluster inputs that are made redundant by transitive ODC implications. Two lists are maintained for each cluster: a direct implication list and a transitive implication list. Given a cluster \( C \) rooted in \( r \), its direct implication list, denoted as \( \text{Impl}(C) \), contains all cluster inputs \( v \) for which at least one of the entry points of \( v \) in \( C \) has \( r \) as macrogate dominator. Therefore, for each \( v \in \text{Impl}(C) \), either \( f_r \rightarrow v \), if \( \text{type}(r) \) is AND, or \( \neg f_r \rightarrow \neg v \), if \( \text{type}(r) \) is OR. Direct implication lists are provided as an argument to TransitiveOdcSimplify. Transitive implication lists, denoted as \( \text{Trans}(C) \), are used to collect those nodes \( v \) for which there exists a sequence of clusters \( C_0, \ldots, C_n \) such that the following conditions hold:
- \( C_{i+1} \in \text{Impl}(C_i) \) for each \( 0 \leq i < n \);
- \( \text{type}(C_{i+1}) = \text{type}(C_i) \) for each \( 0 \leq i < n \);
- \( v \notin \text{Impl}(C_i) \) for \( 0 \leq i < n \);
- \( v \in \text{Impl}(C_n) \).

Transitive implication lists are computed while TransitiveOdcSimplify runs and are used to detect transitive ODC implications w.r.t. the root of each cluster. In TransitiveOdcSimplify, clusters are scanned in topological order. For each cluster \( C_i \), its transitive implication list is first computed.
This is done by extending the current \( \text{Trans}(C_i) \) with every node that is in either the transitive or the direct implication list of the clusters that are in \( \text{Impl}(C_i) \) and are of the same type as \( C_i \). Once the transitive implication list for \( C_i \) has been computed, the procedure scans each node \( v \in \text{cut}(C_i) \) that is in \( \text{Trans}(C_i) \). These nodes are inputs of \( C_i \) for which a transitive ODC implication exists (through some of the other inputs of \( C_i \)). Therefore, each entry point \( u \) of these nodes can be simplified by calling Simplify. Time complexity of Algorithm 2 depends on the size of the transitive lists: \( O(|V| \max \{|\text{Trans}(C_i)|\}) \). Although the sizes of such lists could, in the worst case, be quadratic in the number of nodes, experimentally we observed that in our context of application the size of these lists stays within \( O(|V|) \).

Algorithm 2. TransitiveOdcSimplify(\( \mathcal{G} \), \( \text{Impl} \))
1: for all clusters \( C_i \in \mathcal{G} \) in topological order do
2:   \( \text{Trans}(C_i) \leftarrow \emptyset \)
3:   for all clusters \( C_k \in \text{Impl}(C_i) \) do
4:     for all \( v \) in \( \text{Trans}(C_k) \cup \text{Impl}(C_k) \) do
5:       if \( \text{type}(C_k) = \text{type}(C_i) \) then
6:         \( \text{Trans}(C_i) \leftarrow \text{Trans}(C_i) \cup \{v\} \)
7:   for all nodes \( v \) in \( \text{cut}(C_i) \) do
8:     if \( v \) in \( \text{Trans}(C_i) \) then
9:       for all nodes \( u \) in \( \text{fanout}(v) \cap C_i \) do
10:        Simplify(\( v, u, \text{type}(C_i) \))

B.
Macrogate Refactoring

The second simplification approach we propose tries to refactor portions of the circuit implementing the same type of Boolean function, in order to expose sub-functions already implemented by nodes present elsewhere in the circuit. If successful, sharing can be enforced to reduce the overall size of the circuit. This technique is applied to macrogates in order to guarantee that each node removed by means of refactoring has unit fanout, so that the size of the circuit actually decreases. As an example, consider the AND-macrogate in Figure 4, implementing the function \( f_i(a, b, c, d) = (a \land b) \land (c \land d) \). The idea is to identify a pair of inputs \((i, j)\) such that the node realizing \( i \land j \) does not appear in the macrogate but exists elsewhere in the circuit. Suppose a node \( m \) implementing \( f_m = c \land b \) exists; then the macrogate function \( f_i \) can be refactored as \( f_i(a, b, c, d) = m \land (a \land d) \), so that the gate \( m \) can be shared. The final result of such a refactoring step is a re-parenthesization of the original macrogate function, in which the number of nodes decreases by one, one node now being shared. A similar reasoning applies to OR-macrogates as well. Note that refactoring a macrogate may change the current circuit partitioning, as a previously non-shared node becomes shared.

The MacrogateRefactor procedure (Algorithm 3) tries to refactor macrogates of the circuit in order to enforce better sharing. For each macrogate \( M_i \), first its cut nodes are marked. Then, for each input node of \( M_i \), the procedure scans all the nodes in its fanout list that do not appear in \( M_i \) but are of the same type. Those nodes \( u \) are gates of the same type as \( M_i \) that share an input with \( M_i \). For each of those nodes, the algorithm checks whether its other input node is shared with \( M_i \), by testing if such a node is marked.
In such a case, \( M_i \) can be refactored to enforce sharing with \( u \). Function Refactor handles macrogate refactoring. It also updates any other macrogate that could have been affected by the refactoring. Time complexity of MacrogateRefactor is \( O(|V| \max \{|\text{fanout}(u)|\}) \).

Algorithm 3. MacrogateRefactor(\( \mathcal{G} \))
1: for all macrogates \( M_i \in \mathcal{G} \) do
2:   Mark nodes in \( \text{cut}(M_i) \)
3:   for all \( v \) in \( \text{cut}(M_i) \) do
4:     for all \( u \) in \( \text{fanout}(v) \) do
5:       if \( \text{dom}(v) \neq \text{dom}(u) \) and \( \text{type}(v) = \text{type}(u) \) then
6:         if \( \text{left}(u) \neq v \) and \( \text{left}(u) \) is marked then
7:           Refactor(\( M_i, u, \text{left}(u) \))
8:         else if \( \text{right}(u) \neq v \) and \( \text{right}(u) \) is marked then
9:           Refactor(\( M_i, u, \text{right}(u) \))
10:  Unmark nodes in \( \text{cut}(M_i) \)

IV. SAT-BASED WEAKENING

The previously described reductions follow the trend of fast circuit-based optimizations. We now present a novel approach combining the ideas of interpolant compaction and weakening. Given an interpolant \( I = \text{ITP}(A, B) \), a weaker (resp. stronger) interpolant \( I_w \) (resp. \( I_s \)) is another interpolant such that \( I \rightarrow I_w \) (resp. \( I_s \rightarrow I \)). Interpolant weakness and strength are dual concepts. Given an interpolant \( I \) for \( (A, B) \), its complement \( \neg I \) is an interpolant for \( (B, A) \). A weaker interpolant for \( (A, B) \) corresponds to a stronger interpolant for \( (B, A) \). As mentioned in Section I, interpolant strength and/or weakness can be related to the quality of the interpolant itself [13].
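The three Craig conditions, and the weakening relation \( I \rightarrow I_w \), can be checked by truth-table enumeration on small formulas. In the sketch below, \(A\), \(B\), \(I\) and \(I_w\) are toy examples of our own, not taken from the paper:

```python
# Truth-table check of the Craig conditions and of interpolant weakening.
# A, B, I, Iw are illustrative toy formulas over variables x, y, z, w.
from itertools import product

def holds(f, names):
    """True iff the formula f (a dict -> bool function) is valid."""
    return all(f(dict(zip(names, bits)))
               for bits in product([False, True], repeat=len(names)))

V = ['x', 'y', 'z', 'w']
A  = lambda m: m['x'] and m['y'] and m['z']   # supp(A) = {x, y, z}
B  = lambda m: (not m['y']) and m['w']        # supp(B) = {y, w}
I  = lambda m: m['y'] and m['z']              # supp(I) within shared {y, z}... actually {y}
Iw = lambda m: m['y']                         # weaker candidate, supp = {y}

inconsistent = holds(lambda m: not (A(m) and B(m)), V)   # A AND B unsat
cond1  = holds(lambda m: (not A(m)) or I(m), V)          # (1) A => I
cond2  = holds(lambda m: not (I(m) and B(m)), V)         # (2) I AND B unsat
weaker = holds(lambda m: (not I(m)) or Iw(m), V)         # I -> Iw
cond1w = holds(lambda m: (not A(m)) or Iw(m), V)         # A => Iw as well
cond2w = holds(lambda m: not (Iw(m) and B(m)), V)        # Iw still refutes B
ok = inconsistent and cond1 and cond2 and weaker and cond1w and cond2w
```

Condition (3), on the support of \( I \), is enforced here by construction: both \( I \) and \( I_w \) mention only the shared variable \( y \) (and \( z \), which \( A \) also constrains), mirroring \( \text{supp}(I) \subseteq \text{supp}(A) \cap \text{supp}(B) \).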
State-of-the-art approaches to interpolant strengthening/weakening are based on SAT proof transformations [28]. Interpolant re-computation is another straightforward and practical way to compact an interpolant and change its strength. Given \( I = \text{ITP}(A, B) \), we can generate a weaker interpolant \( I_w = \text{ITP}(I, B) \) or a stronger one \( I_s = \text{ITP}(A, \neg I) \). In essence, we spend extra time, performing an additional interpolant computation, in order to obtain a better interpolant, where better may mean weaker/stronger and possibly more compact. Unfortunately, compaction is not guaranteed, as the size of the final interpolant depends on a SAT solver run. Experimentally, we have observed both increases and decreases in interpolant size. Our strategy is to spend extra time by re-running a SAT solver query (either \( A \wedge \neg I \) or \( I \wedge B \)), while computing the new interpolant in a different way that guarantees compaction. In the following, we outline the main steps of our weakening approach (strengthening is dual):
- \( I \) is encoded as NNF, producing \( I_{NNF} \).
- A Gate-Level Abstraction of \( I_{NNF} \) is performed, using a PBA approach:
  - The SAT query \( I_{NNF} \wedge B \), guaranteed UNSAT, is solved and used to generate the UNSAT core \( C(I_{NNF} \wedge B) \); the full proof is not necessary.
  - Using the UNSAT core, a proof-based abstraction of \( I_{NNF} \) is computed: \( I_{PBA} = \text{PBA}(I_{NNF}, C) \).
  - As a result of PBA, fresh variables \( \Delta \) are introduced at all cut (abstraction) points. So, \( \text{supp}(I_{PBA}) = \Gamma \cup \Delta \), with \( \Gamma = \text{supp}(A) \cap \text{supp}(B) \). The presence of these extra variables prevents \( I_{PBA} \) from being a correct interpolant. Efficient existential quantification of the \( \Delta \) variables can be performed exploiting the NNF encoding.
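The quantification step just mentioned rests on a substitution trick, which can be illustrated by brute force: for an NNF formula in which the \( \Delta \) variables occur only positively, substituting \( \top \) yields exactly the existential quantification (the formula below is a toy example of our own):

```python
# For a formula monotone in d (d occurs only non-negated, as in NNF with
# positive-only Delta variables), exists-d f == f[d := True].
# f is an illustrative toy formula; the check is exhaustive.
from itertools import product

def f(x, y, d):
    return (x and d) or (not y)       # NNF-like: d appears non-negated only

def exists_d(x, y):
    """Explicit existential quantification of d."""
    return f(x, y, False) or f(x, y, True)

def check_substitution():
    for x, y in product([False, True], repeat=2):
        assert exists_d(x, y) == f(x, y, True)   # substitution suffices
    return True
```

The dual substitution with \( \bot \) computes universal quantification, as stated in the identities that follow.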
In particular, \( \exists \Delta \, I_{PBA} \) is performed by replacing all variables in \( \Delta \) with the \( \top \) constant: \( I_{w,NNF} = I_{PBA}|_{\Delta = \{ \top, \top, \ldots, \top \}} \).
- The compacted interpolant \( I_{w,NNF} \) is converted back to the (non-NNF) AIG encoding.

Encoding a circuit as NNF implies a certain cost in terms of size. However, we experimentally observed (see Section V) that this cost is negligible for interpolants, since they originate as pure AND-OR circuits with negations limited to input boundaries. Conversely, we gain the advantage of quantification by substitution. Given a Boolean function \( f(X, \Delta) \) in NNF, with the variables in \( \Delta \) appearing only in non-negated form, \( \Delta \) can be existentially (resp. universally) quantified by substitution:
\[ \exists \Delta \, f(X, \Delta) = f(X, \top) \]
\[ \forall \Delta \, f(X, \Delta) = f(X, \bot) \]
The top-level procedure is described in Algorithm 4. Given a node \( v \), the function \( \text{CNF}(v) \) is used to retrieve the CNF representation of \( f_v \).

Algorithm 4. ItpWeaken(\( I, B \))

The algorithm shows weakening of \( I \) w.r.t. \( B \); strengthening with \( A \) is dual. Furthermore, we use PBA-based abstraction, whereas a CBA-based approach is possible as well. The proposed code unifies GLA (Gate-Level Abstraction) with existential quantification: given the UNSAT core \( C \), circuit nodes with a corresponding CNF variable not in \( C \) are immediately abstracted and replaced with the \( \top \) constant.

V. EXPERIMENTAL RESULTS

We implemented a prototype version of our interpolant compaction procedures on top of the PdTRAV tool [29], a state-of-the-art verification framework. Experimental data in this section provide an evaluation of the proposed techniques. Experiments were run on an Intel Core i7-3770, with 8 CPUs running at 3.40 GHz, 16 GBytes of DDR III 1333 main memory, and hosting an Ubuntu 12.04 LTS Linux distribution.
We set time and memory limits to 900 seconds (3600 for the weakening experiments) and 8 GB, respectively. We performed an extensive experimentation on a selected subset of the interpolants used in [11]. These interpolants are extracted from publicly available benchmarks from past HWMCC [20] suites and are represented as AIGs. We also took into account interpolants derived from software verification problems [12]. The former set is composed of 2472 instances, ranging from \( 1.1 \times 10^5 \) to \( 8.5 \times 10^6 \) nodes. The latter set is composed of 1872 instances, ranging from \( 4 \times 10^2 \) to \( 6 \times 10^4 \) nodes\(^3\). We gathered initial data from the first set of interpolants in order to purge easy instances. We considered easy those instances with fewer than \( 1.5 \times 10^4 \) nodes for which our logic synthesis procedure was able to reach a fix-point within 150 seconds. The purged set of benchmarks, comprising 87 instances ranging from \( 4 \times 10^5 \) to \( 8.5 \times 10^6 \) nodes, was used to conduct a more in-depth experimentation. Figures 5 and 6 show the results obtained for compaction with logic synthesis (Section III) and GLA-based weakening (Section IV), respectively. Compaction techniques are applied incrementally, i.e., we always apply the simplifications described in [11]\(^4\), followed by the techniques described in this paper.

\(^3\)The interpolant circuits are available at http://imagroup.polito.it/index.php/download.
\(^4\)With the exception of the most time-consuming, and less scalable, ITE-based decomposition.

A. Compaction by Logic Synthesis

In our experiments, we evaluated the techniques of Section III by applying them as follows. First the circuit is partitioned into clusters and macrogates. A trivial simplification is performed by removing each duplicated input from macrogates.
Then DirectOdcSimplify, MacrogateRefactor and TransitiveOdcSimplify are iterated in this order, recomputing the circuit partition between each call, until two consecutive iterations reduce the circuit size by less than 1%. For each benchmark, we first apply the AIG balancing procedure of ABC prior to applying any of the aforementioned techniques. We consider the size of the interpolants after balancing as the baseline for the following experimentation. In order to test the individual contributions of the proposed techniques, we performed an initial run with all simplifications enabled, which we call ItpSimplify, followed by a set of runs in which we selectively disabled them one at a time: NoDirectOdcSimplify, NoMacrogateRefactor and NoTransitiveOdcSimplify, respectively. As a last test, we disabled our techniques altogether and performed ITP compaction using only standard logic synthesis (rewriting/refactoring, using the state-of-the-art ABC [30] tool). Figures 5a and 5b illustrate the cumulative size and execution time, respectively, over all the benchmarks. In both cases, the closer a line is to the x axis, the better the result. The two figures clearly illustrate the trade-off between execution time and the size reduction obtained. The purely ABC-based simplification is the best performing one, but it requires a significant amount of time. Different compaction rates are achievable with less computational effort by adopting less aggressive approaches. We excluded timeouts from the visual representation. As mentioned in Section III-A, the size of implication lists could be a limit to the scalability of the proposed methods as well. Although such lists could theoretically grow quadratically in the number of nodes, experimentally we noticed at worst a multiplicative factor of 20.

B. Compaction by Weakening

In order to characterize the rate of ITP compaction achievable through SAT-based weakening/strengthening, we raised the time limit to 3600 seconds.
Such an approach is conceivable when ITP size reduction is crucial, and/or weakening/strengthening is actually the target, which justifies a larger effort in terms of total execution time. A preliminary step for all the proposed techniques requires converting a given interpolant into NNF. In the general case, this step could lead to an increase in circuit size of up to a factor of 2. Given the nature and structure of interpolants themselves, the increase in size is almost negligible: taking into account all the experiments conducted, the largest observed increase was below 0.5%, confirming the intuitive arguments of Section IV. We conducted a set of experiments on the same subset of 87 interpolants, iterating sequences of weakening (labelled \(B\)) and/or strengthening (labelled \(A\)) steps in different patterns. We propose an experimental evaluation of six different sequences: \(A, B, AB, BA, ABAB\) and \(BABA\). We ran our logic synthesis compaction procedure before any weakening/strengthening attempt (baseline). Figures 6a and 6b illustrate the cumulative size and execution time, respectively, over all the benchmarks. The impact of the first kind of compaction chosen is quite noticeable: starting with \(B\) tends to produce better results, likely because most of the interpolants considered have more room for weakening than for strengthening. Overall, it is fairly clear that SAT-based abstraction leads to dramatic compaction, though at a cost in execution time.

VI. CONCLUSIONS

We addressed the problem of optimizing interpolant size for SAT-based UMC. Our main contribution is an integrated approach that targets interpolant compaction, providing different trade-offs between time and memory according to the context of application. We work both at the logic synthesis level and at the SAT level, proposing different techniques aimed at interpolant size reduction.
Overall, our main target is to increase the scalability of existing UMC approaches, taking into account resource limitations and compromising between optimal results and applicability of the proposed methods. We experimentally observed that the proposed optimizations can be beneficial to existing interpolation-based reachability schemes.

VII. ACKNOWLEDGEMENTS

We thank Prof. Natasha Sharygina, Dr. Antti E. J. Hyvärinen and Leonardo Alt from Università della Svizzera Italiana (USI), Switzerland, for the benchmarks generated from software verification problems.

REFERENCES

Fig. 5: Cumulative results of ITP compaction based on logic synthesis, in terms of size and execution time.

Fig. 6: Cumulative results of ITP compaction based on SAT, in terms of size and execution time. Sizes are plotted on a log scale given the higher ratio of compaction achieved.
TUPA at MRP 2019: A Multi-Task Baseline System Daniel Hershcovich* and Ofir Arviv** *University of Copenhagen, Department of Computer Science **Hebrew University of Jerusalem, School of Computer Science and Engineering Abstract This paper describes the TUPA system submission to the shared task on Cross-Framework Meaning Representation Parsing (MRP) at the 2019 Conference on Computational Natural Language Learning (CoNLL). TUPA provides a baseline point of comparison and is not considered in the official ranking of participating systems. While originally developed for UCCA only, TUPA has been generalized to support all MRP frameworks included in the task, and trained using multi-task learning to parse them all with a shared model. It is a transition-based parser with a BiLSTM encoder, augmented with BERT contextualized embeddings. 1 Introduction TUPA (Transition-based UCCA/Universal Parser; Hershcovich et al., 2017) is a general transition-based parser for directed acyclic graphs (DAGs), originally designed for parsing text to graphs in the UCCA framework (Universal Conceptual Cognitive Annotation; Abend and Rappoport, 2013). It was used as the baseline system in SemEval 2019 Task 1: Cross-lingual Semantic Parsing with UCCA (Hershcovich et al., 2019b), where it was outranked by participating team submissions in all tracks (open and closed in English, German and French), but was also among the top 5 best-scoring systems in all tracks, and reached second place in the English closed tracks.
Being a general DAG parser, TUPA has been shown (Hershcovich et al., 2018a,b) to support other graph-based meaning representations and similar frameworks, including UD (Universal Dependencies; Nivre et al., 2019), which was the focus of the CoNLL 2017 and 2018 Shared Tasks (Zeman et al., 2017, 2018); AMR (Abstract Meaning Representation; Banarescu et al., 2013), targeted in the SemEval 2016 and 2017 Shared Tasks (May, 2016; May and Priyadarshi, 2017); and DM (DELPH-IN MRS Bi-Lexical Dependencies; Ivanova et al., 2012), one of the target representations, among PAS and PSD (Prague Semantic Dependencies; Hajič et al., 2012; Miyao et al., 2014), in the SemEval 2014 and 2015 Shared Tasks on SDP (Semantic Dependency Parsing; Oepen et al., 2014, 2015, 2016). DM is converted from DeepBank (Flickinger et al., 2012), a corpus of hand-corrected parses from the LinGO ERG (Copestake and Flickinger, 2000), an HPSG (Pollard and Sag, 1994) grammar using Minimal Recursion Semantics (Copestake et al., 2005). EDS (Elementary Dependency Structures; Oepen and Lønning, 2006) is another framework derived from ERG, encoding English Resource Semantics in a variable-free semantic dependency graph. The CoNLL 2019 Shared Task (Oepen et al., 2019) combines five frameworks for graph-based meaning representation: DM, PSD, EDS, UCCA and AMR. For the task, TUPA was extended to support the MRP format and frameworks, and is used as a baseline system, both as a single-task system trained separately on each framework, and as a multi-task system trained on all of them. The code is publicly available. 2 Intermediate Graph Representation Meaning representation graphs in the shared task are distributed in, and expected to be parsed to, a uniform graph interchange format, serialized as JSON Lines.
The formalism encapsulates annotation for graphs containing nodes (corresponding either to text tokens, concepts, or logical predications), with the following components: top nodes, node labels, node properties, node anchoring, edges, edge labels, and edge attributes.

Figure 1: Left: AMR graph, in the MRP formalism, for the sentence “After graduation, John moved to New York City.” Edge labels are shown on the edges. Node labels are shown inside the nodes, along with any node properties (in the form property=value). The text tokens are not part of the graph, and are matched to nodes by automatic alignment (anchoring). Right: converted AMR graph in the intermediate graph representation. As in the intermediate graph representation for all frameworks, it contains a virtual root node attached to the graph’s top node with a TOP edge, and virtual terminal nodes corresponding to text tokens, attached according to the anchoring (or, for AMR, the provided automatic alignments) with ANCHOR edges. As for all frameworks with node labels and properties (i.e., all but UCCA), labels and properties are replaced with placeholders corresponding to anchored tokens, where possible. The placeholder ⟨ℓ⟩ corresponds to the concatenated lemmas of anchored tokens. Specifically for AMR, name operator properties (e.g., op∗ for New York City) are collapsed to single properties.

2.1 Roots and Anchors

TUPA supports parsing to rooted graphs with labeled edges, and with the text tokens as terminals (leaves), which is the standard format for UCCA graphs. However, MRP graphs are not given in this format, since there may be multiple roots and the text tokens are only matched to the nodes by anchoring (and not by explicit edges). For the CoNLL 2019 Shared Task, TUPA was extended to support node labels, node properties, and edge attributes (see §3.1). Top nodes and anchoring are incorporated into the graph by adding a virtual root node and virtual terminal nodes, respectively, during preprocessing.
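The placeholder substitution described above can be sketched as follows. This is an illustrative simplification, not TUPA's actual code: the function name and data layout are assumptions, and the ASCII string `"<l>"` stands in for the ⟨ℓ⟩ placeholder.

```python
def insert_placeholder(label, anchored_lemmas):
    """Replace the concatenated lemmas of a node's anchored tokens with '<l>'.

    Stand-in for the delexicalization step: node labels drawn from an open
    vocabulary are replaced with a placeholder where they match the input.
    """
    concat = " ".join(anchored_lemmas)
    if concat and concat in label:
        return label.replace(concat, "<l>")
    return label  # no anchored tokens matched; keep the label as-is

# e.g. an AMR concept label that repeats the lemma of its anchored token
delexicalized = insert_placeholder("move-01", ["move"])  # -> "<l>-01"
```

At inference time the substitution is inverted: a predicted `"<l>"` is filled back in with the lemmas of the tokens the node is anchored to.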
A virtual terminal node is created per token according to the tokenization predicted by UDPipe (Straka and Straková, 2017) and provided as companion data by the task organizers. All top nodes are attached as children of the virtual root with a TOP-labeled edge. Nodes with anchoring are attached to the virtual terminals associated with the tokens whose character spans intersect with their anchoring, with ANCHOR-labeled edges. Note that anchoring is automatically determined for training in the case of AMR, using the alignments from the companion data, computed by the ISI aligner (Pourdamghani et al., 2014). There is no special treatment of non-trivial anchoring for EDS: in case a node is anchored to multiple tokens (as is the case for multi-word expressions), they are all attached with ANCHOR-labeled edges, resulting in possibly multiple parents for some virtual terminal nodes. During inference, after TUPA returns an output graph, the virtual root and terminals are removed as postprocessing, and the top nodes and anchoring are derived from their edges to produce the final graph.

2.2 Placeholder Insertion

The number of distinct node labels and properties is very large for most frameworks, resulting in severe sparsity, as they are taken from an open vocabulary.

Figure 2: The TUPA-MRP transition set. We write the stack with its top to the right and the buffer with its head to the left; the set of edges is also ordered with the latest edge on the right. Node, Label, Property and Attribute require that \( x \neq \text{root} \); Child, Label, Property, Left-Edge and Right-Edge require that \( x \notin w_{1..n} \); Attribute requires that \( y \notin w_{1..n} \); Left-Edge and Right-Edge require that \( y \neq \text{root} \) and that there is no directed path from \( y \) to \( x \); and Swap requires that \( i(x) < i(y) \), where \( i(x) \) is the swap index (§3.5).
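A minimal sketch of the §2.1 preprocessing, under an assumed data layout (it is not the actual TUPA implementation): top nodes are attached to a virtual root with TOP edges, and anchored nodes are attached to the virtual terminals whose token character spans intersect their anchoring.

```python
def spans_intersect(a, b):
    # half-open character spans (start, end)
    return a[0] < b[1] and b[0] < a[1]

def add_virtual_nodes(nodes, tops, token_spans):
    """nodes: {node_id: anchor_span or None}; tops: list of top node ids;
    token_spans: character span per token. Returns (parent, child, label)
    edges involving the virtual root and virtual terminals."""
    edges = [("ROOT", top, "TOP") for top in tops]
    for tok_id, tok_span in enumerate(token_spans):
        for node_id, anchor in nodes.items():
            if anchor is not None and spans_intersect(anchor, tok_span):
                edges.append((node_id, ("TERMINAL", tok_id), "ANCHOR"))
    return edges

# one anchored node, one unanchored top node, two tokens
edges = add_virtual_nodes({"n1": (0, 5), "n2": None}, ["n2"],
                          [(0, 5), (6, 10)])
```

Intersection (rather than exact equality) of spans mirrors the description above, so a node anchored to part of a token, or to several tokens, still receives ANCHOR edges.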
--- 3 Transition-based Meaning Representation Parser TUPA is a transition-based parser (Nivre, 2003), constructing graphs incrementally from input tokens by applying transitions (actions) to the parser state (configuration). The parser state is composed of a buffer \( B \) of tokens and nodes to be processed, a stack \( S \) of nodes currently being processed, and an incrementally constructed graph \( G \). Some states are marked as terminal, meaning that \( G \) is the final output. The input to the parser is a sequence of tokens: \( w_1, \ldots, w_n \). Parsing starts with a (virtual) root node on the stack, and the input tokens in the buffer, as (virtual) terminal nodes. Given a gold-standard graph and a parser state, an oracle returns the set of gold transitions to apply at the next step, i.e., all transitions that preserve the reachability of the gold target graph. A classifier is trained using the oracle to select the next transition based on features encoding the parser’s current state, where the training objective is to maximize the sum of log-likelihoods of all gold transitions at each step. If there are multiple gold transitions, the highest-scoring one is taken in training. Inference is performed greedily: the highest-scoring transition is always taken. 
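The parsing loop just described can be sketched as follows. The classifier is replaced by a stand-in scoring stub, and class and transition names are illustrative; only SHIFT, REDUCE and FINISH are shown.

```python
class ParserState:
    """Buffer/stack/graph state of a transition-based parser (simplified)."""
    def __init__(self, tokens):
        self.stack = ["ROOT"]        # parsing starts with the virtual root
        self.buffer = list(tokens)   # virtual terminals to be processed
        self.edges = []              # incrementally constructed graph
        self.finished = False

def parse(tokens, score):
    """Greedy inference: apply the highest-scoring transition until FINISH."""
    state = ParserState(tokens)
    while not state.finished:
        action = score(state)        # classifier picks the next transition
        if action == "SHIFT" and state.buffer:
            state.stack.append(state.buffer.pop(0))
        elif action == "REDUCE" and state.stack:
            state.stack.pop()
        elif action == "FINISH":
            state.finished = True
    return state

# stub policy: shift everything onto the stack, then finish
state = parse(["w1", "w2"], lambda s: "SHIFT" if s.buffer else "FINISH")
```

In training, the same loop is driven by the oracle instead of the stub, and the classifier is updated toward the set of gold transitions at each state.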
Formally, the incrementally constructed graph \( G \) consists of \( (V, E, \ell_V, \ell_E, p, a) \), where \( V \) is the set of nodes, \( E \) is the sequence of directed edges, \( \ell_V : V \rightarrow L_V \) is the node label function, \( L_V \) being the set of possible node labels, \( \ell_E : E \rightarrow L_E \) is the edge label function, \( L_E \) being the set of possible edge labels, \( p : V \rightarrow \mathcal{P}(P) \) is the node property function, \( P \) being the set of possible node property-value pairs, and \( a : E \rightarrow \mathcal{P}(A) \) is the edge attribute function, \( A \) being the set of possible edge attribute-value pairs (a node may have any number of properties; an edge may have any number of attributes). ### 3.1 Transition Set The set of possible transitions in TUPA is based on a combination of transition sets from other parsers, designed to support reentrancies (Sagae and Tsujii, 2008; Tokgöz and Eryiğit, 2015), discontinuities (Nivre, 2009; Maier, 2015; Maier and Lichte, 2016) and non-terminal nodes (Zhu et al., 2013). Beyond the original TUPA transitions (Hershcovich et al., 2017, 2018a), for the CoNLL 2019 Shared Task, transitions are added to support node labels, node properties, and edge attributes. Additionally, top nodes and node anchoring are encoded by special edges from a virtual root node and to virtual terminal nodes (corresponding to text tokens), respectively (see §2). The TUPA-MRP transition set is shown in Figure 2.
It includes the following original TUPA transitions: the standard \texttt{SHIFT} and \texttt{REDUCE} operations (to move a node from the buffer to the stack and to discard a stack node, respectively), \texttt{NODE}_X for creating a new non-terminal node and an \( X \)-labeled edge (so that the new node is a parent of the stack top), \texttt{LEFT-EDGE}_X and \texttt{RIGHT-EDGE}_X to create a new \( X \)-labeled edge, \texttt{SWAP} to handle discontinuous nodes (moving the second topmost stack node back to the buffer), and \texttt{FINISH} to mark the state as terminal. Besides the original TUPA transitions, TUPA-MRP contains a \textsc{Child} transition to create unanchored children for existing nodes (like \textsc{Node}, but the new node is a \textit{child} of the stack top),\footnote{While UCCA contains unanchored (\textit{implicit}) nodes corresponding to non-instantiated arguments or predicates, the original TUPA disregards them as they are not included in standard UCCA evaluation. The CoNLL 2019 Shared Task omits implicit UCCA nodes too, in fact, but the \textsc{Child} transition is included to support unanchored nodes in AMR, and is not used otherwise.} a \textsc{Label} transition to select a label for an existing node (either the stack top or the second topmost stack node), a \textsc{Property} transition to select a property-value pair for an existing node, and an \textsc{Attribute} transition to select an attribute-value pair for an existing edge (the last created edge). The original TUPA transitions \textsc{Left-Remote}_\textsc{X} and \textsc{Right-Remote}_\textsc{X}, creating new \textit{remote} edges (a UCCA-specific distinction), are omitted. Remote edges are encoded instead as edges with the \textit{remote} attribute, and are supported by the combination of \textsc{Edge} and \textsc{Attribute} transitions. In contrast to the original TUPA transitions, \textsc{Edge} transitions are allowed to attach multiple parents to a node.
### 3.2 Transition Classifier To predict the next transition at each step, TUPA uses a BiLSTM module followed by an MLP and a softmax layer for classification (Kiperwasser and Goldberg, 2016). The model is illustrated in Figure 3. The BiLSTM module (illustrated in more detail in Figure 4) is applied before the transition sequence starts, running over the tokenized input sequence. It consists of a pre-BiLSTM MLP with feature embeddings (§3.3) and pre-trained contextualized embeddings (§3.4) concatenated as inputs, followed by (multiple layers of) a bidirectional recurrent neural network (Schuster and Paliwal, 1997; Graves, 2008) with a long short-term memory cell (Hochreiter and Schmidhuber, 1997). While edge labels are combined into the identity of the transition (so that for example, \textsc{Left-Edge}_\textsc{P} and \textsc{Left-Edge}_\textsc{S} are separate transitions in the output), there is just one transition for each of \textsc{Label}, \textsc{Property} and \textsc{Attribute}. Each time one of these transitions is selected, an additional classifier is invoked with the set of possible values for the currently parsed framework. This hard separation is made due to the large number of node labels and properties in the MRP frameworks. Since there is only one possible edge attribute value (\textit{remote} for UCCA), performing this transition always results in this value being selected. ### 3.3 Features In both training and testing, we use vector embeddings representing the lemmas, coarse POS tags (UPOS) and fine-grained POS tags (XPOS). These feature values are provided by UDPipe as companion data by the task organizers. In addition, we use punctuation and gap type features (Maier and Lichte, 2016), and previously predicted node and edge labels, node properties, edge attributes and parser actions. These embeddings are initialized randomly (Glorot and Bengio, 2010).
To the feature embeddings, we concatenate numeric features representing the node height, number of parents and children, and the ratio between the number of terminals and the total number of nodes in the graph \(G\). Numeric features are taken as they are, whereas categorical features are mapped to real-valued embedding vectors. For each non-terminal node, we select a \textit{head terminal} for feature extraction, by traversing down the graph, selecting the first outgoing edge each time according to alphabetical order of labels. ### 3.4 Pre-trained Contextualized Embeddings Contextualized representation models such as BERT (Devlin et al., 2019) have recently achieved state-of-the-art results on a diverse array of downstream NLP tasks, yielding improvements over non-contextual representations. We use the weighted sum of the last four hidden layers of a BERT pre-trained model as extra input features.\footnote{We used the \texttt{bert-large-cased} model from \url{https://github.com/huggingface/pytorch-transformers}.} BERT uses a wordpiece tokenizer (Wu et al., 2016), which segments all text into sub-word units, while TUPA uses the UDPipe tokenization. To maintain alignment between wordpieces and tokens, we sum the BERT output vectors corresponding to the wordpieces of each token to obtain its representation. ### 3.5 Constraints As each annotation scheme has different constraints on the allowed graph structures, we apply these constraints separately for each task. During training and parsing, the relevant constraint set rules out some of the transitions according to the parser state. Some constraints are task-specific, others are generic. For example, in AMR, a node with an incoming \texttt{NAME} edge must have the \texttt{NAME} label. In UCCA, a node may have at most one outgoing edge with label $\in \{\text{PROCESS, STATE}\}$.
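The wordpiece-to-token alignment of §3.4 (summing each token's wordpiece vectors) can be sketched with plain lists; the vector dimensions and the grouping input are illustrative, not BERT's actual output format.

```python
def token_vectors(wordpiece_vecs, pieces_per_token):
    """Sum consecutive wordpiece vectors into one vector per token.

    pieces_per_token[i] gives how many wordpieces make up token i;
    the groups partition wordpiece_vecs in order.
    """
    out, k = [], 0
    for n in pieces_per_token:
        group = wordpiece_vecs[k:k + n]
        # elementwise sum over the group
        out.append([sum(dim) for dim in zip(*group)])
        k += n
    return out

# e.g. a token split into 3 wordpieces ("un", "##believ", "##able"),
# followed by a single-piece token, with toy 2-dimensional vectors
vecs = token_vectors([[1.0, 2.0], [0.5, 0.5], [0.5, 0.5], [1.0, 1.0]],
                     [3, 1])
```

This keeps the parser's token granularity (UDPipe's) while still using subword-level contextual vectors.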
An example of a generic constraint is that stack nodes that have been swapped should not be swapped again, to avoid infinite loops in inference. To implement this constraint, we define a \textit{swap index} for each node, assigned when the node is created. At initialization, only the root node and terminals exist. We assign the root a swap index of 0, and for each terminal, its position in the text (starting at 1). Whenever a node is created as a result of a \texttt{NODE} or \texttt{CHILD} transition, its swap index is the arithmetic mean of the swap indices of the stack top and buffer head. While this constraint may theoretically limit the ability to parse arbitrary graphs, in practice we find that all graphs in the shared task training set can still be reached without violating it. 4 Multi-Task Learning Whereas in the single-task setting TUPA is trained separately on each framework as described above, in the multi-task setting, all frameworks share a BiLSTM for encoding the input. In addition, each framework has a framework-specific BiLSTM, private to it. Each framework has its own MLP on top of the concatenation of the shared and framework-specific BiLSTM (see Figure 3). <table> <thead> <tr> <th>Hyperparameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>Lemma dim.</td> <td>200</td> </tr> <tr> <td>UPOS dim.</td> <td>20</td> </tr> <tr> <td>XPOS dim.</td> <td>20</td> </tr> <tr> <td>Dep. rel. dim.</td> <td>10</td> </tr> <tr> <td>Punct. dim.</td> <td>1</td> </tr> <tr> <td>Action dim.</td> <td>3</td> </tr> <tr> <td>Node label dim.</td> <td>20</td> </tr> <tr> <td>Node prop. dim.</td> <td>20</td> </tr> <tr> <td>Edge label dim.</td> <td>20</td> </tr> <tr> <td>Edge attrib. 
dim.</td> <td>1</td> </tr> <tr> <td>MLP layers</td> <td>2</td> </tr> <tr> <td>MLP dim.</td> <td>50</td> </tr> <tr> <td>Shared BiLSTM layers</td> <td>2</td> </tr> <tr> <td>Shared BiLSTM dim.</td> <td>500</td> </tr> <tr> <td>Shared pre-BiLSTM MLP layers</td> <td>1</td> </tr> <tr> <td>Shared pre-BiLSTM MLP dim.</td> <td>300</td> </tr> <tr> <td>Private BiLSTM layers</td> <td>2</td> </tr> <tr> <td>Private BiLSTM dim.</td> <td>500</td> </tr> <tr> <td>Private pre-BiLSTM MLP layers</td> <td>1</td> </tr> <tr> <td>Private pre-BiLSTM MLP dim.</td> <td>300</td> </tr> </tbody> </table> Table 1: Hyperparameter settings. For node labels and properties and for edge attributes (when applicable), an additional “axis” (private BiLSTM and MLP) is added per framework (e.g., AMR node labels are predicted separately and with an identical architecture to AMR transitions, except the output dimension is different). This is true for the single-task setting too, so in fact the single-task setting is multi-task over \{transitions, node labels, node properties, edge attributes\}. 5 Training details The model is implemented using DyNet v2.1 (Neubig et al., 2017).\footnote{\url{http://dynet.io}} Unless otherwise noted, we use the default values provided by the package. We use the same hyperparameters as used in previous experiments on UCCA parsing (Hershcovich et al., 2018a), without any hyperparameter tuning on the CoNLL 2019 data. 5.1 Hyperparameters We use dropout (Srivastava et al., 2014) between MLP layers, and recurrent dropout (Gal and Ghahramani, 2016) between BiLSTM layers, both with $p = 0.4$. We also use word, lemma, coarse- and fine-grained POS tag dropout with $\alpha = 0.2$. Table 2: Official test MRP F-scores (in %) for TUPA (single-task and multi-task). For comparison, the highest score achieved for each framework and evaluation set is shown. 
<table> <thead> <tr> <th>Official</th> <th>TUPA (single-task)</th> <th>TUPA (multi-task)</th> <th>Best System</th> </tr> </thead> <tbody> <tr> <td></td> <td>ALL LPPS</td> <td>ALL LPPS</td> <td>ALL LPPS</td> </tr> <tr> <td>DM</td> <td>55.54</td> <td>58.60</td> <td>42.69</td> </tr> <tr> <td>PSD</td> <td>51.76</td> <td>58.87</td> <td>52.65</td> </tr> <tr> <td>EDS</td> <td>81.00</td> <td>81.36</td> <td>73.95</td> </tr> <tr> <td>UCCA</td> <td>27.56</td> <td>40.06</td> <td>23.65</td> </tr> <tr> <td>AMR</td> <td>44.73</td> <td>47.04</td> <td>33.75</td> </tr> <tr> <td>Overall</td> <td>57.70</td> <td>57.55</td> <td>45.34</td> </tr> </tbody> </table> (Kiperwasser and Goldberg, 2016): in training, the embedding for a feature value \(w\) is replaced with a zero vector with a probability of \(\frac{\alpha}{\#(w)+\alpha}\), where \(\#(w)\) is the number of occurrences of \(w\) observed. In addition, we use node dropout (Hershcovich et al., 2018a): with a probability of 0.1 at each step, all features associated with a single node in the parser state are replaced with zero vectors. For optimization we use a minibatch size of 100, decaying all weights by \(10^{-5}\) at each update, and train with stochastic gradient descent for 50 epochs with a learning rate of 0.1, followed by AMSGrad (Reddi et al., 2018) for 250 epochs with \(\alpha = 0.001, \beta_1 = 0.9\) and \(\beta_2 = 0.999\). Table 1 lists other hyperparameter settings. 5.2 Official Evaluation For the official evaluation, we did not use a development set, and trained on the full training set for as many epochs as the evaluation period allowed for. The multi-task model completed just 3 epochs of training. The single-task models completed 12 epochs for DM, 22 epochs for PSD, 14 epochs for EDS, 100 epochs for UCCA (the maximum number we allowed) and 13 epochs for AMR. Due to an oversight resulting from code re-use, in the official evaluation we used non-whitelisted resources.
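The word-dropout scheme from §5.1 (Kiperwasser and Goldberg, 2016) reduces each feature value's keep probability with its rarity; a one-line sketch (function name is illustrative):

```python
def dropout_prob(count, alpha=0.2):
    """Probability of zeroing out a feature value observed `count` times:
    alpha / (count + alpha), so rare values are dropped far more often."""
    return alpha / (count + alpha)

# a hapax is dropped ~17% of the time; a frequent word almost never
p_rare = dropout_prob(1)      # 0.2 / 1.2
p_freq = dropout_prob(1000)   # 0.2 / 1000.2
```

This exposes the model to unknown-value inputs during training, roughly in proportion to how often unknowns would replace each value at test time.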
Specifically, for AMR, we used a constraint forcing any node whose label corresponds to a PropBank (Palmer et al., 2005) frame to only have the core arguments defined for the frame. We obtained the possible arguments per frame from the PropBank frame files. Additionally, for the intermediate graph representation, we used placeholders for tokens’ negation, verb, noun and adjective form, as well as organizational and relational roles, from a pre-defined lexicon included in the AMR official resources. This is similar to the delexicalization employed by Buys and Blunsom (2017a) for AMR parsing. 5.3 Post-evaluation Training After the evaluation period, we continued training for a longer period of time, using a slightly modified system: we used only resources whitelisted by the task organizers in the post-evaluation training, removing the constraints and placeholders based on PropBank and AMR lexicons. In this setting, training is done over a shuffled mix of the training sets for all frameworks (no special sampling is done to balance the number of instances per framework), and a development set of 500 instances per framework. We select the epoch with the best average MRP F-score on the development set, which is selected by sampling 500 random training instances from each framework (the development instances are excluded from the training set). The large multi-task model only completed 4 training epochs in the available time; the single-task models completed 24 epochs for DM, 31 epochs for PSD, 25 epochs for EDS, 69 epochs for UCCA and 23 epochs for AMR. 6 Results Table 2 presents the averaged scores on the test sets in the official evaluation (§5.2), for TUPA and for the best-performing system in each framework and evaluation set. Since non-whitelisted resources were used, the TUPA scores cannot be taken as a baseline.
Furthermore, due to insufficient training time, all models but the UCCA one are underfitting, while the UCCA model is overfitting due to excessive training without early stopping (no development set was used in this setting). Table 3: Post-evaluation test scores (in %) for TUPA (single-task and multi-task), using the MRP F-score (left), and using Native Evaluation (middle): labeled SDP F-score for DM and PSD, EDM F-score for EDS, primary labeled F-score for UCCA, and Smatch for AMR. The rightmost column (Trans./Token Ratio) shows the mean ratio between the length of the oracle transition sequence and the sentence length, over the training set. <table> <thead> <tr> <th>Evaluation</th> <th>Post-evaluation Test Scores</th> <th>Native Evaluation Test Scores</th> <th>Trans./Token Ratio</th> </tr> </thead> <tbody> <tr> <td></td> <td>TUPA (single-task)</td> <td>TUPA (multi-task)</td> <td>TUPA (single-task)</td> </tr> <tr> <td></td> <td>ALL</td> <td>LPPS</td> <td>ALL</td> </tr> <tr> <td>DM</td> <td>75.57</td> <td>80.46</td> <td>62.16</td> </tr> <tr> <td>PSD</td> <td>70.86</td> <td>70.62</td> <td>65.95</td> </tr> <tr> <td>EDS</td> <td>84.85</td> <td>85.36</td> <td>79.39</td> </tr> <tr> <td>UCCA</td> <td>77.69</td> <td>82.15</td> <td>64.05</td> </tr> <tr> <td>AMR</td> <td>53.85</td> <td>53.47</td> <td>39.00</td> </tr> <tr> <td>Overall</td> <td>75.73</td> <td>77.63</td> <td>66.01</td> </tr> </tbody> </table> 6.1 Post-evaluation Results Table 3 presents the averaged scores on the test sets for the post-evaluation trained models (§5.3). Strikingly, the multi-task TUPA consistently falls behind the single-task one, for each framework separately and in the overall score. This likely stems from several factors: the sharing strategy could be improved, but mainly the multi-task model is probably underfitting due to insufficient training. We conclude that better efficiency and faster training are crucial for the practical applicability of this approach.
Perhaps a smaller multi-task model would have performed better by training on more data in the available time frame. 6.2 Diagnostic Evaluation The rightmost column of Table 3 displays the mean ratio between the length of the oracle transition sequence and the sentence length by framework, over the shared task training set. Scores are clearly better for frameworks with longer oracle transition sequences, perhaps because many of the transitions are “easy”, as they correspond to structural elements of the graphs or properties copied from the input tokens. 6.3 Comparability with Previous Results Previous published results of applying TUPA to UCCA parsing (Hershcovich et al., 2017, 2018a, 2019b,a) used a different version of the parser, without contextualized word representations from BERT. For comparability with previous results, we train and test an identical model to the one presented in this paper on the SemEval 2019 Task 1 data (Hershcovich et al., 2019b), which is UCCA-only, but contains tracks in English, German and French. For this experiment, we use bert-multilingual instead of bert-large-cased, and train a shared model over all three languages. A 50-dimensional learned language embedding vector is concatenated to the input. Word, lemma and XPOS features are not used. No multi-task learning with other frameworks is employed. The results are shown in Table 4. While improvement is achieved uniformly over the previous TUPA scores, even with BERT, TUPA is outperformed by the shared task winners (Jiang et al., 2019). Note that Jiang et al. (2019) also used bert-multilingual in the open tracks. We also train and test TUPA with BERT embeddings on v1.0 of the UCCA English Web Treebank (EWT) reviews dataset (Hershcovich et al., 2019a). While the EWT reviews are included in the MRP shared task UCCA data, the different format and preprocessing makes for slightly different scores, so we report the scores for comparability with previous work in Table 5.
We again see pronounced improvements from incorporating pretrained contextualized embeddings into the model. 7 Related Work Transition-based meaning representation parsing dates back to semantic dependency parsing work by Sagae and Tsujii (2008); Tokgöz and Eryiğit (2015), who support a DAG structure by allowing multiple parents to be created by EDGE transitions, and by Titov et al. (2009), who applied a SWAP transition (Nivre, 2008) for online reordering of nodes to support non-projectivity. Transition-based parsing was applied to AMR

Table 4: Test UCCA F-scores (in %) on all edges, primary edges and remote edges, on the SemEval 2019 Task 1 data. The previously published TUPA scores are shown (TUPA w/o BERT), as well as scores for TUPA with BERT contextualized embeddings, TUPA (w/ BERT), averaged over three separately trained models in each setting, differing only by random seed (standard deviation < 0.03); and the scores for the best-scoring system from that shared task. <table> <thead> <tr> <th></th> <th>All</th> <th>Prim.</th> <th>Rem.</th> </tr> </thead> <tbody> <tr> <td>English-Wiki (open)</td> <td></td> <td></td> <td></td> </tr> <tr> <td>TUPA (w/o BERT)</td> <td>73.5</td> <td>73.9</td> <td>53.5</td> </tr> <tr> <td>TUPA (w/ BERT)</td> <td>77.8</td> <td>78.3</td> <td>57.4</td> </tr> <tr> <td>Jiang et al. (2019)</td> <td><strong>80.5</strong></td> <td><strong>81.0</strong></td> <td><strong>58.8</strong></td> </tr> <tr> <td>English-20K (open)</td> <td></td> <td></td> <td></td> </tr> <tr> <td>TUPA (w/o BERT)</td> <td>68.4</td> <td>69.4</td> <td>25.9</td> </tr> <tr> <td>TUPA (w/ BERT)</td> <td>74.9</td> <td>75.7</td> <td><strong>44.0</strong></td> </tr> <tr> <td>Jiang et al.
(2019)</td> <td><strong>76.7</strong></td> <td><strong>77.7</strong></td> <td><strong>39.2</strong></td> </tr> <tr> <td>German-20K (open)</td> <td></td> <td></td> <td></td> </tr> <tr> <td>TUPA (w/o BERT)</td> <td>79.1</td> <td>79.6</td> <td>59.9</td> </tr> <tr> <td>TUPA (w/ BERT)</td> <td>81.3</td> <td>81.6</td> <td><strong>69.2</strong></td> </tr> <tr> <td>Jiang et al. (2019)</td> <td><strong>84.9</strong></td> <td><strong>85.4</strong></td> <td><strong>64.1</strong></td> </tr> <tr> <td>French-20K (open)</td> <td></td> <td></td> <td></td> </tr> <tr> <td>TUPA (w/o BERT)</td> <td>48.7</td> <td>49.6</td> <td>2.4</td> </tr> <tr> <td>TUPA (w/ BERT)</td> <td>72.0</td> <td>72.8</td> <td><strong>45.8</strong></td> </tr> <tr> <td>Jiang et al. (2019)</td> <td><strong>75.2</strong></td> <td><strong>76.0</strong></td> <td><strong>43.3</strong></td> </tr> </tbody> </table> Table 5: Test UCCA F-scores (in %) on all edges, primary edges and remote edges, on the UCCA EWT reviews data. TUPA (w/o BERT) is from Hershcovich et al. (2019a). TUPA (w/ BERT) is averaged over three separately trained models, differing only by random seed (standard deviation < 0.03). <table> <thead> <tr> <th></th> <th>All</th> <th>Prim.</th> <th>Rem.</th> </tr> </thead> <tbody> <tr> <td>TUPA (w/o BERT)</td> <td>71.0</td> <td>72.1</td> <td>47.0</td> </tr> <tr> <td>TUPA (w/ BERT)</td> <td><strong>75.2</strong></td> <td><strong>76.1</strong></td> <td><strong>54.8</strong></td> </tr> </tbody> </table> 8 Conclusion We have presented TUPA, a baseline system in the CoNLL 2019 shared task on Cross-Framework Meaning Representation Parsing. TUPA is a general transition-based DAG parser, which is trained with multi-task learning on multiple frameworks. Its input representation is augmented with BERT contextualized embeddings. Acknowledgments We are grateful for the valuable feedback from the anonymous reviewers.
We would like to thank the other task organizers, Stephan Oepen, Omri Abend, Jan Hajič, Tim O’Gorman and Nianwen Xue, for valuable discussions and tips on developing the baseline systems, as well as for providing the data, evaluation metrics and information on the various frameworks.
xAMP: A Protocol Suite for Group Communication INESC Technical Report RT/___-92 L. Rodrigues, P. Veríssimo January 1992 LIMITED DISTRIBUTION NOTICE A shorter version of this report was published in the Proceedings of the 11th Symposium On Reliable Distributed Systems, Oct, 1992, Houston, Texas, © 1992 IEEE. In view of copyright protection, its distribution is limited to peer communications and specific requests. Abstract The xAMP is a highly versatile group communications service aimed at supporting the development of distributed applications with different dependability, functionality, and performance requirements. This paper describes the services provided by xAMP and the protocols used to implement them. These range from unreliable and non-ordered to atomic multicast, and are enhanced by efficient group addressing and management support. The basic protocols are synchronous, clock-less and designed to be used over broadcast local-area networks, and portable to a number of them. The functionality provided yields a reasonably complete solution to the problem of reliable group communication. Whilst other protocols exist that offer similar services, we follow a new engineering approach by deriving all qualities of service from a single basic procedure. Thus, their implementation shares data structures, procedures, failure-recovery algorithms and group monitor services, resulting in a highly integrated package. 1 Introduction Distributed systems are widely used today, encouraging the development of applications that demand progressively more of the distribution support. Well-known styles of distributed computing, such as RPC and client-server, are very useful and mature.
Their widespread use was in fact necessary to show that their point-to-point and request-with-reply nature does not provide a universal solution to the increasing demands of developers of distributed applications, which require complex and highly concurrent interactions between several participants, whose membership may be largely dynamic. Striking examples are found in the domains of computer-supported cooperative work and distributed computer control. These and other examples, documented in the literature [14,18,5], call for complementary paradigms. One such paradigm gaining increasing acceptance is reliable group communication (multicasting), concerning the dissemination of information to a group of participants in a system. The implementation of this paradigm meets a number of problems, due to natural impairments of the networking machinery: multicast information can be lost or corrupted; it may reach only a subset of the intended recipients; partitions may occur, leaving the recipients isolated, at least temporarily. Even when the information is not lost, it may be delivered in arbitrary order or at an arbitrary time, whereas the user might have expected a given ordering or timeliness. Algorithms and protocols to solve these problems have been presented in recent years. They have been named after the several flavors provided: atomic, ordered, causal, reliable, etc. There are systems built according to these principles, using one or more of these protocols, such as the ISIS group toolbox [5], the group availability services of the IBM AAS [13], the conversational group support of PSYNC [24], and the group membership and replication management services of the DELTA-4 distributed fault-tolerant architecture [25]. From a system perspective, a systematic approach to group orientation is developing.
It should lead, in a top-down approach, to the definition of group communication and management services in response to pre-defined user requirements: support of application (or problem) classes and group types; required properties of agreement, order and synchronism; naming and addressing; time and value domain correctness (fault-tolerance, real-time). In this paper we present an attempt in that direction, the xAMP, a multi-primitive group communications service. The xAMP is a complete redesign of its predecessor, the AMp [31], used as the communications support of the DELTA-4 system. xAMP is aimed at supporting the development of distributed applications with different dependability, functionality, and performance requirements. xAMP attributes emerged from the lessons learnt by experimenting with AMp in DELTA-4 over the past few years: versatility; a range of qualities of service; homogeneity: a single core for a multi-protocol structure; efficient name-to-address translation support: logical group and sub-group addressing, selective physical addressing; knowledge about group participants: separation of membership management (user-oriented) from monitoring (protocol-oriented). The xAMP implementation consists of an integrated package, designed to be used over broadcast local-area networks. We have reasonable confidence in the utility of the several xAMP functions, thanks to the scrutiny of the several DELTA-4 consortium partners, who have been demanding users of xAMP. The paper is organized as follows: Section 2 provides some comparison with related approaches and the following section discusses the requirements for group support. In Section 4, the architecture of xAMP is summarized, and the protocols used to implement the different qualities of service are presented in Section 5. Section 6 presents the current state of the implementation and provides some concluding remarks.
2 Related Work There are quite a few good published algorithms providing individual group communication properties, like causal [27,24], total order [22], atomic [9,23,16,12], and best-effort [10]. If a group communication support is to be provided, it must supply a range of functionality — let us call it quality of service or QOS, a term widely used in the communications community — including addressing modes, group management support, and delivery properties. There exist a number of solutions providing varying degrees of order, such as [7,18,14,24]. Our subsystem, besides a range of order properties, from total and causal to FIFO, provides different agreement and synchronism properties, such as best-effort and at-least, or loose and tight synchrony. Their combination, according to user requirements, yields the different qualities of service that xAMP offers. We follow the approach, pioneered by Birman [4], of encapsulating in a group communication subsystem a set of QOSs offered to users, relieving them of the task of constructing such functionality. The alternative approach of providing a single basic all-purpose primitive, as followed in [24], allows fine-tuning but leaves the responsibility of constructing the necessary --- 1The AMp provided only an atomic multicast service. 2Delta-4 is a consortium sponsored by the CEC Esprit II research programme, formed by Ferranti-CSL (GB), Bull (F), Credit Agricole (F), IEI (I), ITB (D), INESC (P), LAAS (F), LGI (F), MARI (GB), NCSR (GB), Renault (F), SEMA (F), Un. of Newcastle (GB), designing an open, dependable distributed architecture. 3The definition of these and the rationale behind their utility is detailed ahead in the paper. services to the higher levels. We also try to keep the advantages of the latter, by providing a set of primitives, well tested (correctness) and optimized (performance), rich enough to represent most distributed application requirements.
With regard to the engineering of these multi-primitive services, there are several possible approaches. The first ISIS protocol suite [4] had two different protocols, which required a third, special protocol to enforce consistency among them. The approach taken by O’Malley & Peterson [21] consists of providing “micro-protocols” implementing basic properties and interconnecting them procedurally to obtain a given quality of service. While it is the most versatile idea, it may prove difficult to efficiently implement and combine protocols with individual properties, mainly if not only different order but also agreement and synchronism properties are envisaged. The xAMp approach consists of a core protocol, from which all combinations of properties are derived. Most services follow common execution paths, and then branch to specific terminations. With this approach, code is re-used, structures are shared, and it is easier to enforce group monitoring and consistency among the information streams of the several services. In the most recent ISIS group communication subsystem, all order properties are also built on top of a basic causal protocol [7]. Looking at other protocols in more detail: Chang [9] describes an atomic broadcast protocol where requests pass through a centralized token holder. The degree of tolerance of node failures can be parametrized, in a trade-off with efficiency. A significant latency, which is not bounded a priori, may build up due to the method used to tolerate failures of the token site. The work of Navaratnam [23] is inspired by the method of Chang. Garcia-Molina gives a protocol in [14], inspired by Chang’s centralized ordering node, but instead of a single node there is a forest of nodes, in a graph-oriented scheme; their method does not fully take advantage of multicasting facilities and its efficiency depends on groups being reasonably static.
The protocol by Kaashoek [16] is equivalent to the non-fault-tolerant protocol of Chang; it owes its efficiency to this trade-off with dependability, and to the fact that it assumes a bare-machine implementation, without the overheads of an operating system. Among the few works that take advantage of the properties of broadcast LANs, as xAMp does, we can cite [9, 8, 22, 16]. 3 Requirements for group support The need for support of group activity is based on the assumption, shown correct by a number of real examples, that in a distributed architecture processes frequently get together to achieve a common goal. The set of such processes can be called a group. A communication service can be said to support groups when it provides services that facilitate the design and the execution of distributed software running on such a group of distributed processes in cooperation, competition or replication. The first services required in a group support service are, naturally, the group membership services. A powerful support for groups should allow the dynamic creation – and reconfiguration – of process groups. During the lifetime of a group, processes may join or leave the group, and the communications service should provide primitives to perform these operations. The failure of a group member should also be detected and an indication of the event should be provided to the remaining members. The second goal of a group support service should be to provide an efficient and versatile support for the exchange of information between group members. To start with, a multicast communication service should avoid the need to explicitly perform point-to-point transfers to execute a multicast operation. Such a service should accept a list of addresses, which we call a selective address, as a valid destination address for a multicast message and would – transparently – deliver the message to the intended recipients. Additionally, a logical address can be associated Table 1: xAMp Properties.
Consistent Group View - Px1 - Each change to group membership is indicated by a message obeying total order, to all correct group participants within a known and bounded time $T_{g}$. Addressing - Px2 - Selective addressing: The recipients of any message are identified by a pair $(g, sl)$, where $g$ is a group identification and $sl$ is a selective address (a list of physical addresses). - Px3 - Logical addressing: For each group $g$ there is a mapping between $g$ and an address $A_{g}$, such that $A_{g}$ allows all correct members of $g$ to be addressed without the knowledge by the sender of their number or physical identification. Validity - Px4 - Non-triviality: Any message delivered, was sent by a correct participant. - Px5 - Accessibility: Any message delivered, was delivered to a participant correct and accessible for that message. - Px6 - Delivery: Any message is delivered, unless the sender fails, or some participant(s) is(are) inaccessible. Synchronism - Px7 - The time between any service invocation and the (eventual) subsequent indication at any recipient ($T_{e}$), as well as the time between any two such (eventual) indications ($T_{i}$), are: - Loose synchronism: $\Delta T_{e}$ and $\Delta T_{i}$ may not be negligible, in relation to $\max T_{e}$. - Tight synchronism: $\Delta T_{e}$ and $\Delta T_{i}$ are negligible, in relation to $\max T_{e}$. Agreement - Px8 - Unanimity: Any message delivered to a participant, is delivered to all correct addressed participants. - Px9 - At-least-N: Any message delivered to a recipient, is delivered to at least $N$ correct recipients. - Px9.1 - At-least-To: Given a subset $P_{t,o}$ of the recipients, any message delivered to a recipient, is delivered to all correct recipients in $P_{t,o}$. - Px10 - Best-effort-N: Any message delivered to a recipient, is delivered to at least $N$ correct recipients, in absence of sender failure.
- Px10.1 - Best-effort-To: Given a subset $P_{t,o}$ of the recipients, any message delivered to a recipient, is delivered to all correct recipients in $P_{t,o}$, in absence of sender failure. Order - Px11 - Total order: Any two messages delivered to any correct recipients, are delivered in the same order to those recipients. - Px12 - Causal order: Any two messages, delivered to any correct participants of any group, are delivered in their "precedes" order. - Px13 - FIFO order: If any two messages from the same participant, are delivered to any correct recipient, they are delivered in the order they were sent. with a multicast group, allowing all group members to be addressed through a logical name. This frees the programmer from having to deal explicitly with selective address lists. Note that a logical name can be seen as a pre-defined address list, containing the addresses of all group members, and being constantly updated upon every group change. The third goal of a group support service is to provide an execution environment that applies algorithms to ensure a given set of desirable properties. These properties are summarized in table 1. Validity and synchronism properties (Px4, Px5, Px6 and Px7) are desirable in most communication systems. They usually state that the user can trust the system in the sense that messages are not corrupted, arbitrarily lost or spontaneously generated. Synchronism properties assure that the service is provided within known time bounds. Timely behavior of the protocol is of major relevance in real-time systems. Agreement properties describe when, and to whom, a multicast message must be delivered. The strongest property in this set is unanimity (Px8). Unanimity states that a message, if delivered to a correct participant, will be delivered to all other correct participants despite the occurrence of faults. This may be stronger than usually required. 
For instance, queries to replicated servers need only reach one of the replicas, since all responses would be the same. Quorum-based protocols are another example where unanimity is not required. This raised the need to provide different agreement properties (Px9 and Px10). Finally, order properties specify which ordering disciplines the protocol should impose on the messages exchanged between group members. The strongest property, total order (Px11), assures that messages are delivered in the same order to different participants. Causal (Px12) and FIFO (Px13) are weaker ordering disciplines that can provide better performance for those applications not requiring total order. Clearly, all these different requirements cannot be provided in an efficient manner by a single communication primitive. That is why a versatile group communication service should be able to provide several qualities of service. 4 Assumptions about the xAMp architecture The algorithms and protocols that implement the services described in the previous section strongly depend on the target architecture. The xAMp follows a low-level approach without compromising openness and portability, by using standard local area networks. LANs have architecture and technology attributes which can be used for improved performance and dependability (e.g., broadcast/multicast, bounded number of omission errors, bounded transmission delay). Although designed for LANs, xAMp does not depend on any local area network in particular. This was achieved by defining an abstract network interface, discussed in detail in [29]. We recapitulate its properties here, in table 2. Having our protocols tuned for LANs does not mean we have overlooked the problem of interconnected networks. We argue that in an interconnected networking scenario protocols can be more efficient if they rely on low-level "local" protocols that recognize important properties of the local networks.
Our work has provided efficient solutions for the local scope [29], which we are now extending to interconnected networks [34]. Protocol design assumes that communication components have a fail-silent behavior. When high coverage is required, the use of self-checking components must substantiate this assumption. Tests performed in the Delta-4 project have shown, however, that coverage of the assumption for off-the-shelf hardware is largely acceptable for applications requiring up to a moderate level of fault-tolerance. The xAMp architecture was designed to meet high expectations with regard to fault-tolerance and real-time. The highly reliable and timely environment yielded by a single LAN used in a closed fashion also motivated the LAN-based approach taken. We carefully devised a dependability model and established its correctness in [29], for such an environment. The basic protocols of our system, although clock-less (they do not require clocks), are synchronous, in the sense that known and bounded execution times are enforced, using the techniques described in [32]. Our subsystem comprises a global time service, made of approximately synchronized local clocks. Since clock synchronization is a complex issue on its own, it will not be dealt with in detail in this paper. The interested reader may refer to [30], where the xAMP clock synchronization service is described in detail. With the help of this time service, a clock-driven protocol is built, exhibiting the tight synchrony offered by protocols like those in [12,17]. In that sense, our system offers a more complete solution than either the asynchronous systems or the latter synchronous systems that are only clock-driven\(^4\). 5 Protocols: from unreliable to tight multicast This section describes protocols which combine properties of Table 1 in order to achieve a number of qualities of service.
The selection of the latter was driven by user requirements put forward by diverse classes of distributed applications. These requirements arose from the literature and largely from the needs of the group replication and membership protocols of the Delta-4 architecture. 5.1 The transmission with response procedure The abstract network service, upon which xAMP relies, offers an unreliable multicast service presenting a set of properties which are most useful to implement reliable multicast primitives. In the absence of faults, the broadcast (Pn1) and full duplex (Pn4) properties provide message delivery to any processor connected to the network. However, although errors can be considered rare in LANs, the occasional loss of messages – or omissions – cannot be prevented. Thus, the communication service must be able to recover from such errors. In the xAMP, omission errors are detected and recovered using a transmission with response procedure: it uses acknowledgments to confirm the reception of the message and detects omission errors based on the bounded omission degree property of the abstract network\textsuperscript{5}. \(^{4}\)Although our clock-driven solution is not as efficient, partly because these systems use space redundancy, i.e. replicated networks, making a comparison difficult anyway. Figure 1: \begin{verbatim} tr-w-resp ((m), ord, send, Mr, Pr, nr) 01 // (m) is a message to be sent (D_{(m)} is the set of recipients). 02 // “ord” is a boolean specifying if network order is relevant. 03 // “send” is a boolean that allows the first transmission to be skipped. 04 // Mr is a bag of responses. 05 // Pr is a set of processors from which a response is expected (usually Pr = D_{(m)}). 06 // nr is the number of responses expected (usually nr = \#Pr). 07 08 retries := 0; 09 do 10 if (retries = 0 \lor ord) then restore the initial Pr and nr; Mr := \emptyset; fi 11 if (retries > 0 \lor send) then send ((m)); fi 12 timeout := 0; start a timer; // wait for responses ...
13 while (nr > 0 \land \lnot timeout) do 14 when response \{rm\} received from processor \(p \land p \in Pr\) do 15 add \{rm\} to Mr; nr := nr - 1; remove \(p\) from Pr; od 16 when timer expires do 17 timeout := 1; od 18 od 19 retries := retries + 1; 20 while (retries < MAX \land nr > 0) 21 if (nr > 0) then check membership fi \end{verbatim} The tr-w-resp procedure\textsuperscript{6} is depicted in figure 1. It consists of a loop where the data message is sent over the network and responses are awaited. The procedure waits during a pre-defined time interval for the responses, which are then inserted in a response bag, and exits when the desired number of responses is collected. If some responses are missing, the response bag is re-initialized and the message re-transmitted. The main loop finishes when all the intended responses are received or when a pre-defined retry value is reached. To preserve network order, the procedure re-transmits the message until it is acknowledged by all recipients in a same transmission. When order is not required, the procedure can be optimized by keeping responses in the bag from one re-transmission to the next (response messages are inserted only once in the response bag). For some omission patterns, this allows the bag to be filled faster. To activate this mode, the flag “ord” must be set to false. Finally, the boolean variable “send” allows the user to specify whether the message should be sent over the network on the first cycle of the procedure. This parameter is useful to allow other processors to collect responses and execute the procedure on behalf of the sender without immediately re-transmitting the message (by setting the flag to false). Section 5.3 explains how this feature is used to provide some of the xAMP qualities of service. Several transmissions with response can execute simultaneously, on the same or on different machines. We assume that messages can be uniquely identified.
Different re-transmissions of the same message can also be distinguished. It is thus possible to route any response to the appropriate \textit{tr-w-resp} instantiation (also called an \textit{emitter-machine})\textsuperscript{7}. \textsuperscript{5}The detailed technique, as well as its advantages over other approaches such as diffusion-based masking, is discussed in detail in [32]. \textsuperscript{6}It is a modified version of the procedure given in [29]. To make a protocol tolerant to sender crashes, several emitter-machines may be activated concurrently, at recipient sites, for a same message transmission (in this case, responses must also be broadcast). See the \textit{atLeast} agreement for an example. The unique message identification is disseminated with the message within an \textit{xAMP} protocol header common to all \textit{xAMP} frames. The protocol header contains the identification of the sender, the destination selective list, a frame type field and the message identification, among other information\textsuperscript{8}. 5.2 Best-effort agreement The \textit{tr-w-resp} procedure is used in \textit{xAMP} to provide reliable \textit{frame} delivery\textsuperscript{9}. The procedure is activated by the sender of a message, which must remain correct during the execution of the protocol; otherwise, the number of recipients of the message cannot be determined a priori. A very efficient communication primitive is offered this way by the \textit{xAMP}, under the name \textit{bestEffort}. From the point of view of the sender, \textit{bestEffort} is just a call to \textit{tr-w-resp}. The appropriate choice of \(P_r\) and \(n_r\) allows an early return in case of omissions, when not all the addressed recipients need to receive the message. For instance, when \(n_r = 0\), the procedure immediately exits after sending the message without waiting for replies, being equivalent to \textit{unreliable multicast}.
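As an illustration only (not the authors' implementation), the retry loop of tr-w-resp can be sketched in Python over a pluggable transport. The names `send`, `collect_responses` and `MAX_RETRIES` are hypothetical stand-ins for the abstract network interface and the bounded omission degree; timeouts are simulated rather than real.

```python
# Sketch of the tr-w-resp loop of Figure 1 (illustrative, not xAMp code).
# `send(msg)` transmits over the (simulated) network; `collect_responses(timeout)`
# yields (sender, reply) pairs received within the timeout.

MAX_RETRIES = 3  # plays the role of the bounded omission degree

def tr_w_resp(msg, recipients, n_expected, send, collect_responses,
              net_order=True, send_first=True):
    """Transmit `msg` and gather responses, re-trying on omissions.

    Returns (responses, pending): `pending` is the set of recipients that
    never answered; in the real protocol a non-empty `pending` would
    trigger a membership check.
    """
    responses, pending = [], set(recipients)
    retries = 0
    while True:
        if retries == 0 or net_order:
            # preserving network order: restart collection on every retry
            responses, pending = [], set(recipients)
        if retries > 0 or send_first:
            send(msg)
        for sender, reply in collect_responses(timeout=1.0):
            if sender in pending:
                responses.append(reply)
                pending.discard(sender)
                if len(responses) >= n_expected:
                    break
        retries += 1
        if len(responses) >= n_expected or retries >= MAX_RETRIES:
            return responses, pending
```

The `net_order` flag mirrors the "ord" parameter: when it is false, responses collected in earlier rounds could be kept across retransmissions instead of being discarded.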
The recipients are only required to provide an acknowledgment to the sender and to discard duplicates. The protocol is depicted in fig. 2. This primitive and the next one are helpful in a number of distributed applications where high-level functionality reduces the order and agreement requirements, but the need for efficient dissemination to the group is retained. Figure 2: \begin{verbatim} bestEffort quality of service 01 // sender 02 // \(P_r \subseteq \mathcal{D}_{(m)} \land n_r < \#P_r\) 03 when user requests to send \(\langle m \rangle\) do 04 \quad \text{tr-w-resp} (\(\langle m \rangle, 0, 1, \mathcal{M}_r, P_r, n_r\)); 05 \quad if (\#\mathcal{M}_r \neq \#P_r) then check membership fi od 06 // receiver 07 when message \(\langle m \rangle\) received from processor \(p\) do 08 \quad send \(\langle ok_m \rangle\); od // acknowledge; duplicates are discarded \end{verbatim} 5.3 AtLeast agreement The \textit{bestEffort} quality of service is not able to assure delivery in case of sender failure. In order to provide assured delivery in the presence of sender failures – what we call the \textit{atLeast} quality of service – we make every recipient responsible for the termination of the protocol. In consequence, \textit{tr-w-resp} is invoked both at the sender and at the recipients, as depicted in fig. 3. However, to avoid superfluous re-transmissions of the data message, recipients skip the first step of the \textit{tr-w-resp} procedure, using the “send” boolean parameter (see section 5.1). \textsuperscript{7}Since several emitter-machines can run in parallel, the protocol implementation is able to execute several user requests at the same time. However, since a node usually has limited resources (memory and cpu), the implementation may restrict the number of simultaneous transmissions, for instance keeping a fixed-size pool of emitter machines. Some qualities of service may impose additional restrictions on parallelism. \textsuperscript{8}The interested reader can refer to [20]. \textsuperscript{9}A frame is a piece of information in transit in the LAN. It may encapsulate a message or protocol control information. In the fault-free case, the data message will be acknowledged by all intended recipients, these acknowledgments will be seen by all the participants, and no retransmission takes place. This algorithm can be improved to avoid multiple retransmissions when a single omission occurs, by making the recipients use slightly different timeout values and making the protocol refrain from re-sending when a retransmission from another participant is detected before the timeout expires. Figure 3: ``` atLeast/reliable quality of service 01 // sender 02 // \( P_r \subseteq \mathcal{D}_{(m)} \land n_r < \#P_r \) 03 when user requests to send \( \langle m \rangle \) do 04 \( \text{tr-w-resp} \ (\langle m \rangle, 0, 1, \mathcal{M}_r, \mathcal{D}_{(m)}, \#\mathcal{D}_{(m)}); \) 05 06 if \( (\#\mathcal{M}_r \neq \#\mathcal{D}_{(m)}) \) then check membership fi 07 od 08 09 // receiver 10 when message \( \langle m \rangle \) received from processor \( p \) do 11 12 send \( \langle ok_m \rangle \); 13 if \( (\lnot \text{accepted}_{(m)}) \) then 14 accepted_{(m)} := 1; 15 \( \text{tr-w-resp} \ (\langle m \rangle, 0, 0, \mathcal{M}_r, \mathcal{D}_{(m)}, \#\mathcal{D}_{(m)}); \) 16 if \( (\#\mathcal{M}_r \neq \#\mathcal{D}_{(m)}) \) then check membership fi 17 fi od ``` As with bestEffort, several agreement variants of atLeast are obtained by an appropriate choice of the \( P_r \) and \( n_r \) parameters. For instance, if \( n_r \) is chosen such that \( n_r < \#\mathcal{D}_{(m)} \), the primitive will assure that at least \( n_r \) of the addressed processors receive the frame. This might be satisfactory to implement quorum-based protocols. In certain passive or semi-active replication management protocols, one may wish, for performance reasons, that the message reach all replicas, whereas it is mandatory for consistency that it reach at least the active replica.
In this case \( P_r \) is set to the host identification of the active replica. When \( P_r = D_{(m)} \) (that is, when all group members must receive the message), this primitive is also called reliable multicast. Reliable multicast will be used as the base of two other qualities of service: causal and delta.

5.4 Causal multicast

The reliable quality of service does not try to impose any ordering constraints on the messages exchanged. However, in many systems the relative order of messages has a special relevance. A particular example is FIFO order: if a given processor sends two messages, there is a probability that the second message is causally related to the first one. For this reason, most point-to-point systems deliver messages in the order they were sent. When the interactions between participants extend across several nodes, a similar reasoning can be applied. In effect, causal relations in a distributed system can be subtle and difficult to identify, especially when there are several communication paths between participants, including real-world interactions (e.g., sensors and actuators). For simplicity, we limit our analysis here to systems where participants only interact through message exchange using xAMp. The protocol itself restrains the sources of causal relations. Generalizing, if a processor sends a message after having received one, there is a potential causal relation [19] between the message received and the message sent. Several authors have discussed the advantages of respecting this kind of order in a system [4,24,14]. We look for a protocol that preserves this implementation of causal order, which has also been called **Logical Order**: \( m_2 \) is delivered after \( m_1 \) if: 1) \( m_2 \) is sent after \( m_1 \) by the same processor; or 2) \( m_1 \) is delivered to the sender of \( m_2 \) before \( m_2 \) is sent; or 3) \( m_1 \rightarrow m_3 \land m_3 \rightarrow m_2 \).
Figure 4:
```
causal quality of service

01 // sender
02 when user requests to send <m> do
03     let h_m := H_local;
04     xAMp (reliable, <h_m, m>);
05     add <m> to H_local;
06 od
07
08 // receiver
09 when <h_m, m> message received do
10     keep <m> until h_m is stable; od
11 when h_m becomes stable do
12     add h_m to H_local; add <m> to H_local;
13     deliver <m>; delivered_(m) := 1; od
```

In order to provide a causal quality of service, in addition to the assured delivery provided by the reliable quality of service, we need a mechanism allowing logical order to be preserved. There are several implementations of logical order: using logical clocks [19], using message serialization [9], exchanging histories [4,24], or the more recent vector clocks [27,6]. In xAMp, logical order is obtained using causal histories, that is, by keeping a record of the messages sent and received and exchanging this information along with the data messages. A causal history is a list of causal pairs \( \langle id_n, D_n \rangle \), where \( id_n \) is the identifier of message \( n \) and \( D_n \) is its set of recipients. A message sent through the causal quality of service always carries the causal history of its sender. A causal history is updated every time a message is sent or delivered. When a message is sent, its causal pair is added to the sender's causal history. When a message is delivered, the message's pair and the associated causal history, \( h_m \), are added to the recipient's causal history (see fig. 4). In order to be delivered, a message must become stable. A given message \( m \) is stable, in a given processor \( k \), as soon as all messages in \( h_m \) have already been delivered.
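The stability test admits a compact sketch in Python (illustrative names, not xAMp code): a message is stable at processor \( k \) once every message in its causal history that was addressed to \( k \) has already been delivered there; history entries not addressed to \( k \) are skipped.

```python
# Sketch of the stability test for causal delivery (illustrative names).
# A causal history is a list of pairs (msg_id, recipients); a message is
# stable at processor k once every message in its history that was
# addressed to k has already been delivered at k.

def is_stable(history, k, delivered):
    """history: list of (msg_id, recipients); delivered: set of msg ids."""
    return all(msg_id in delivered
               for msg_id, recipients in history
               if k in recipients)

h_m = [("m1", {"k", "j"}), ("m2", {"j"})]   # m2 is not addressed to k
assert is_stable(h_m, "k", delivered={"m1"})     # m1 delivered, m2 irrelevant
assert not is_stable(h_m, "k", delivered=set())  # m1 still pending
```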
More precisely, and since some messages in \( h_m \) may not be addressed to \( k \), \( m \) becomes stable in \( k \) when, for all precedent messages \( n \) such that \( \langle id_{n}, D_{n} \rangle \in h_m \land k \in D_{n} \), the flag delivered\(_n\) is already true in \( k \). To prevent the infinite growth of causal histories we use the synchronism of the underlying reliable quality of service. Let \( \Delta \) be the maximum execution time for this quality of service. By definition, any message \( m \) becomes stable within \( \Delta \) real time after being sent; thus, \( \Delta \) can be used to periodically remove stable identifiers from causal histories. Our approach is similar to that of PSYNC [24], but extended to cope with non-uniform addressing. Note that while vector clocks seem to be quite efficient at eliminating unnecessary logical orderings sometimes enforced by the other approaches, we do not use them because our addressing scheme is too flexible to adequately support them: two consecutive messages can be sent to totally disjoint destination sets, so a single clock is not able to represent all causal relationships. Recent implementations of ISIS [7] suggest extensions to vector clocks for several groups, but these are difficult to implement in a system such as ours, where the number of different destination sets can be very large.

5.5 The atomic and tight qualities of service

The *atomic* quality of service adds, to the qualities of service previously described, the assurance of total order. This can be achieved by exploiting the properties of the abstract network: in fact, messages are naturally ordered as they cross the LAN medium (abstract network property Pn5). However, the occurrence of omission faults, forcing the retransmission of messages, may disturb this natural serialization.
To preserve network order, a mechanism must be implemented to ensure that messages are delivered to the user respecting the order in which they crossed the network and, when a message crosses the network several times, that a unique retransmission is used to establish this order. This requires extra work both at the sender and at the recipient sides, as described below. In each recipient, a *reception queue* is maintained, where messages are inserted in the order they cross the network. Since at the moment of reception a recipient has no way to know whether the message was also received by the other recipients, the message cannot be delivered immediately to the user. Instead, it is stamped as *unaccepted* and kept in the queue until there is an assurance that it was inserted in the same relative position in all recipients' queues. If, meanwhile, a retransmission is received, the message is moved to the end of the queue. On its side, the sender invokes *tr-w-resp* activating the "ord" flag, thus requiring the retransmission of the message until all recipients acknowledge the same retry. When a successful retransmission is detected, the sender issues an *accept* frame, committing the message. When the accept frame is received, the recipients mark the associated message as *accepted* and deliver it as soon as it reaches the top of the queue. Since the message cannot be delivered until the accept frame is received, the protocol can be further enhanced to tolerate temporary *inaccessibility* of a recipient, that is, to allow a receiver to discard an incoming message due to a temporary lack of resources such as buffer overflow. In that case the recipient should return a negative acknowledgment (nok\(_m\)) to the sender. If, upon collecting all responses, the sender received some negative acknowledgment, it issues a *reject* instead of the *accept*. Upon reception of the reject, all recipients discard the corresponding message. The operation of the protocol is depicted in fig. 5.
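The reception-queue behaviour just described can be modelled with a few lines of Python (a deliberately minimal sketch with illustrative names; the real queue also carries priorities and insertion points): messages enter in network order as unaccepted, a retransmission moves a message to the tail, and delivery proceeds from the head only through accepted messages.

```python
# Sketch of the per-recipient reception queue used by the atomic QOS
# (illustrative names, not xAMp code).

class ReceptionQueue:
    def __init__(self):
        self.queue = []          # (msg_id, accepted) in network order

    def receive(self, msg_id):
        # a retransmission re-positions the message at the tail
        self.queue = [(m, a) for m, a in self.queue if m != msg_id]
        self.queue.append((msg_id, False))

    def accept(self, msg_id):
        # the accept frame commits the message at its current position
        self.queue = [(m, a or m == msg_id) for m, a in self.queue]

    def deliver(self):
        # deliver from the head, stopping at the first unaccepted message
        delivered = []
        while self.queue and self.queue[0][1]:
            delivered.append(self.queue.pop(0)[0])
        return delivered

q = ReceptionQueue()
q.receive("m1"); q.receive("m2")
q.receive("m1")                  # retransmission: m1 moves behind m2
q.accept("m2"); q.accept("m1")
assert q.deliver() == ["m2", "m1"]
```

Note how the retransmission of m1 changes its delivery position: the unique, final retransmission is what establishes the common order at all recipients.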
To save space, the extended *tight* quality of service is presented in the figure: the *atomic* service can be obtained simply by removing lines 8, 11, 14 and 18. It consists of a *two-phase accept* protocol that resembles a commit protocol where the *sender* coordinates the protocol: it sends a message, implicitly *querying* about the possibility of its acceptance, to which recipients reply (dissemination phase). In the second phase (decision phase), the sender checks whether the *responses* are all affirmative, in which case it issues an *accept*, or a *reject* otherwise. To ensure the reception of the decision by all correct recipients, the *accept* and *reject* frames are also sent using the *tr-w-resp* procedure. The two-phase accept protocol has a variant that is also depicted in fig. 5. This variant, known as the *negatively acknowledged accept*, avoids the second series of acknowledgments to improve performance. In this variant the sender transmits the accept only once and no acknowledgment is generated. If an omission affects the dissemination of the accept message, this will be recovered by the "request-decision" procedure. In the latter scenario, termination of the protocol is delayed but, due to the low error rate expected in local area networks, throughput is significantly improved. Since in the two-phase accept the sender coordinates the protocol, some exception mechanism must be implemented to overcome its failure. In the *atomic* quality of service, protocol execution is carried out, in the event of sender failure, by a termination protocol. This termination protocol is executed by an *atomic monitor* function. There is no permanent monitor activity, however; so to speak, a monitor only exists when needed. The monitor impersonates the failed sender but never retransmits a data message on its behalf. It just collects information about the state of the transmission and disseminates a decision (reject or accept) accordingly.

Figure 5:
```
two-phase accept (used in atomic and tight qualities of service)

01 // sender
02
03 when user requests to send <m> do
04     tr-w-resp (<m>, 1, 1, M_r, D_(m), #D_(m));
05
06
07     if (∀ r ∈ M_r : r is of type <ok_m, Q>) then
08         choose ip_Q;                                          // (Tight only)
09         tr-w-resp (<acc_m, ip_Q>, 0, 1, M_r, D_(m), #D_(m));  // if neg. ack: just send <acc_m> once
10
11     else
12         tr-w-resp (<rej_m>, 0, 1, M_r, D_(m), #D_(m));
13     fi; sent_(m) := 1;
14
15
16 when <rd_m> received ∧ sent_(m) do
17     send <acc_m>; od
18
01 // receiver
02
03 when message <m> received from processor p do
04     remove <m> from Q;
05
06     if (I am accessible for <m>) then
07         add <m> to Q; send <ok_m, Q>; start wdTimer_(m); lock Q;  // (Tight only: no message can be consumed)
08
09     else
10         send <nok_m>;
11     fi; od
12
13 when message <acc_m, ip_Q> received from processor p do
14     stop wdTimer_(m); send <ok_acc>; accepted_(m) := 1;
15     re-order Q; unlock Q; od                                  // (Tight only)
16     // if neg. ack: no need to send <ok_acc>
17
18 when message <rej_m> received from processor p do
19     stop wdTimer_(m); send <ok_rej>; remove <m> from Q;
20     unlock Q; od                                              // (Tight only)
21 when wdTimer_(m) expires do
22     tr-w-resp (<rd_m>, 0, 1, M_r, p, 1);
23     if (<acc_m> ∉ M_r) then monitor must be called fi od
24
25 when <m> is on top of Q do
26     deliver <m>; od
```

The information required to perform a monitor action on a group is supplied by the active recipients of that group. Once activated, the action of the monitor is structured recursively in two-phased transmissions, like the normal atomic multicast.
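The sender-side decision step of the two-phase accept reduces to a simple predicate over the collected responses, sketched below in Python (illustrative names): all-positive responses commit the message, any negative acknowledgment rejects it.

```python
# Sketch of the sender's decision in the two-phase accept
# (illustrative, not xAMp code): after the dissemination phase the
# sender inspects the collected responses; all-positive -> accept,
# any negative (an inaccessible recipient) -> reject.

def decide(responses):
    """responses: dict recipient id -> 'ok' or 'nok'."""
    return "accept" if all(r == "ok" for r in responses.values()) else "reject"

assert decide({"p1": "ok", "p2": "ok"}) == "accept"
assert decide({"p1": "ok", "p2": "nok"}) == "reject"  # one recipient was inaccessible
```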
This solves the problem of monitor failure recovery, introduced by centralizing monitoring functions: if an active monitor fails, it is replaced by another monitor, invoked, in the same way as for a normal transmission, by a recipient that detects the failure. The action starts with an investigation phase, where information about the local contexts of group members is gathered, and ends with a decision, disseminated to those members. The decision contains the new group view, after insertion of new members, or elimination of members leaving or having failed. The recovery algorithm was described in detail in [31] and formally validated [2], so we omit its details here. For real-time applications, the major disadvantage of the atomic service is its inability to deal with message priorities: since incoming messages are always inserted at the end of the receive queue, a high-priority message can suffer a possibly long queueing delay until delivery. This is clearly incompatible with the real-time requirements for preemption and respect of message priorities (emphasized by the design of the real-time variant of the Delta-4 architecture, also known as XPA [3]). To avoid this problem, the two-phase accept procedure must be extended to allow the negotiation of the final position of a given message: during the dissemination phase the coordinator reads the state of the queue in all recipients, using information inserted in the acknowledgment messages. Then, in the decision phase, it disseminates an insertion point along with the decision. The method is similar to the algorithm proposed by Skeen which inspired the ABCAST protocol [4]. The final position of the message in the queue is chosen by the sender based on the information gathered during the first phase of the protocol.
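A minimal sketch of this Skeen-style negotiation (illustrative names; the actual encoding of queue state in acknowledgments is not shown here): each acknowledgment carries the position the recipient proposes for the message, and the sender picks the maximum, so the chosen slot is valid in every recipient's queue.

```python
# Sketch of the insertion-point negotiation used to respect message
# priorities (illustrative, not xAMp code). Each recipient proposes a
# queue position in its acknowledgment; choosing the maximum proposal
# yields a position that is free at every recipient, and the sender
# disseminates it along with the accept decision.

def choose_insertion_point(proposed_positions):
    """proposed_positions: dict recipient id -> proposed queue position."""
    return max(proposed_positions.values())

acks = {"p1": 4, "p2": 7, "p3": 5}   # hypothetical per-recipient proposals
assert choose_insertion_point(acks) == 7
```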
To support lock-step synchronous distributed algorithms (and certain input/output activities in real-time settings), one needs messages to be delivered periodically and simultaneously to every recipient. Protocols that simulate this abstraction try to be steady (display a constant execution time) and tight (deliver a message at the same time everywhere) [33]. Besides allowing preemption, the tight quality of service only makes a best effort to improve the tightness of the protocol. Better can only be done by taking a clock-driven approach. For this purpose, xAMp has the delta quality of service, which provides a global total and causal order. The protocol is not described in detail here due to lack of space; it is described in [26]. It is based on the reliable quality of service to assure delivery, and on the xAMp time service to ensure order. The protocol follows the method established by Lamport [20]. Note that this mechanism does not satisfy the requirement of short preemption latency without extra architectural support. In fact, even if an urgent message is inserted at the top of the receive queue, it still has to wait for the processing of the previous message. Suppose that the application is structured as a state machine [28]. Each message could then be associated with a state-machine command. Since commands must be processed in an atomic manner, a new command can only be processed after completion of the previous command. If a command has a very long execution time, a high-priority message can be strongly delayed. This behavior can only be bypassed if commands are split into several sub-commands: high-priority commands can then be inserted between two sub-commands, emulating the preemption of the long envelope command. In order to facilitate this kind of programming, xAMp possesses a facility that allows the user to send several messages as a whole. These messages are seen by xAMp as a single message containing several "slots" or pieces.
Slotted messages require just one execution of the protocol to be disseminated, and the several slots are automatically inserted in the receive queue as individual units as soon as the envelope message is accepted.

5.6 Delta QOS

The delta quality of service provides total global order based on synchronized virtual clocks. It can be easily implemented using the reliable multicast service and a clock synchronization service. Before being sent, the message is timestamped with the value of the local virtual clock. Upon reception, messages are ordered by the values of their timestamps. Messages with the same timestamp are ordered using the identification of their sender (we assume that it is possible to establish an order relation between processor identifiers). To assure that timestamp order is not violated, no message can be delivered before the arrival of all messages with a smaller, or the same, timestamp. That is, any message must wait a worst-case time \( \Delta \) for all the potentially precedent messages. Naturally, this time \( \Delta \) is given by the execution time of the reliable multicast QOS plus the maximum desynchronization between virtual clocks, \( \delta \). This protocol is a variant of that of [11], but uses the "tr-w-resp" procedure to avoid massive retransmission. However, our protocol can exhibit higher values for the delay \( \Delta \), since acknowledgments are awaited before a message is retransmitted. On the other hand, in the absence of faults, our protocol sends the data message only once over the network, thus saving network bandwidth.

5.7 Group Addressing

In xAMp, we have not imposed any restriction on the destination sets, \( D_{(m)} \), of a given message \( \langle m \rangle \). In fact, our protocols are quite generic and are able to accept any list of nodes as a destination set. This means that the user is able to address any subset of nodes in the system, listing explicitly the desired recipients. This is also called selective addressing.
In addition, the user is able to create groups, to which a logical address is automatically assigned. When using a logical address, the user relies on the protocol to deliver the message to all group members. That is, the protocol must assure that the *group view*, i.e., the list of members of a given group, is maintained and used in a correct manner. For this reason, all actions that are bound to modify this view must be performed through specialized functions. There are three operations that can be performed on a group: *join*, *leave* (voluntary departure) and *failure* (involuntary departure). The most complex function is the one that performs the join and leave operations. The complexity comes from the use of *logical addresses*: immediately after the end of the operation, the new member must start to receive messages logically addressed to the group¹ (and an old member must stop receiving them). To avoid inconsistent uses of group views, these operations must be executed as an atomic action: in xAMp, this is obtained by making all members of the group inaccessible during the join and leave operations. In the LAN context, these operations are short-lasting and bounded in duration. On the other hand, keeping failed members in the group view does not compromise protocol correctness, but may imply a performance degradation, since messages will always be retransmitted up to MAX retries. It is therefore desirable to remove failed stations from group views as soon as possible. In xAMp, failures are detected during message exchange: a sender detects the failure of a recipient during the execution of the *tr-w-resp* procedure; in the *atomic* and *tight* qualities of service a recipient may detect the failure of a sender by the absence of a decision frame. Upon detection of a failure, the identity of the failed station is quickly disseminated to all nodes.
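The performance argument for prompt view updates can be made concrete with a small Python sketch (all names and the retry bound are hypothetical, not taken from xAMp): while a dead member stays in the view, every transmission pays the full retry budget; once it is removed, single-round termination is restored.

```python
# Sketch (illustrative): cost of keeping a failed member in the group
# view. tr-w-resp keeps retransmitting until enough acknowledgments
# arrive, so a dead member in the view costs MAX_RETRIES rounds on
# every message; removing it restores one-round termination.

MAX_RETRIES = 8  # hypothetical retry bound (the paper's "MAX retries")

def rounds_needed(view, alive):
    """Rounds executed by a sender addressing `view` when only the
    members in `alive` answer (assuming no network omissions)."""
    return 1 if view <= alive else MAX_RETRIES   # <= is subset test

alive = {"p1", "p3"}
assert rounds_needed({"p1", "p2", "p3"}, alive) == MAX_RETRIES  # p2 has failed
assert rounds_needed({"p1", "p3"}, alive) == 1                  # after view update
```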
A special atomic message is sent on each group whose membership was affected by the failure, providing the user with a failure indication obeying total order.

### 5.8 The cost of xAMp qualities of service

The xAMp protocol provides the user with different qualities of service (QOS), as shown in table 3, ranging from unreliable multicast to *atomic* multicast. Different tradeoffs between functionality and performance are provided, each assuring only a subset of the properties depicted in table 1. To make these tradeoffs clear, we present a short analysis of xAMp performance. Results are summarized in table 4. The first part of the table presents the number of rounds required to execute the protocol in the best- and worst-case scenarios, together with the number of frames exchanged during protocol execution. These results are functions of the maximum number of faults and of the number of message recipients. The second part presents the best and worst execution times, \( T_e \), of each QOS. These results are functions of \( \Gamma \), the worst-case network delay, and \( Tr \), the time required to execute a round, that is, to send a message and collect the corresponding responses. We now derive and discuss these values.

We start by analyzing the tr-w-resp procedure. This procedure executes a number of transmission-with-response rounds until the message is delivered with success. In the best case the message is received by all intended recipients on the first transmission, with a worst-case delay given by the network transmission delay, \( \Gamma \). Note that the sender has an extra delay consisting of the time required to gather the acknowledgments, \( Tr^-(d) \). In this case the data message is transmitted once and \( d \) responses are collected.

¹ Note that the *tr-w-resp* procedure assures recovery from omission faults relying on the number of acknowledgments collected. This means that if some group member is missing from the group view, message delivery is not assured to it.
When omissions occur, responses are awaited until the timer expires. There is then a worst-case time to execute a round, \( Tr^+(d) \), roughly given by the value of this timer plus some processing time. If a worst case of \( k \) omissions occurs, a delay of \( kTr^+(d) + \Gamma \) is incurred. In this scenario, the data message is transmitted \( k + 1 \) times and at most \( d(k + 1) \) responses are generated. Also note that, since the protocol is activated at all participants, the number of crash faults does not increase the number of rounds over the minimum required to mask the network omission degree.

Note that although the bestEffort, atLeast and causal QOSs share the same values, the processing overhead is significantly different. The tr-w-resp procedure is only required to be executed at the sender for the bestEffort QOS. On the other hand, for the atLeast QOS, this procedure must be executed both at the sender and at the recipients. The causal quality of service incurs an extra processing overhead related to the update and comparison of causal histories. Similarly, since the delta QOS is based on the atLeast service, it has the same costs in terms of traffic generated. However, a fixed delay must be observed before a message can be delivered, so it presents the poorest best-case performance.

The atomic QOS involves longer termination times, since acknowledgments must be awaited before the decision is disseminated. In the best case, the data message is sent, acknowledgments are gathered and an accept is disseminated (without acknowledgments). There is one round of message exchange (generating one data message and \( d \) responses) plus one decision sent. The tight QOS is very similar to atomic, except that the decision must always be acknowledged, thus involving the exchange of \( d \) responses in an extra round. In both services, the message becomes ready to be delivered as soon as the decision arrives, that is, \( Tr^-(d) + \Gamma \) after the beginning of the transfer.
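A worked instance of the tr-w-resp cost formulas (illustrative numbers only): with \( k \) omissions, the data frame is sent \( k+1 \) times, at most \( d(k+1) \) responses are generated, and the delay is \( kTr^+(d) + \Gamma \).

```python
# Worked instance of the worst-case cost of tr-w-resp (the numbers are
# illustrative; tr_plus stands for Tr+(d) and gamma for the network
# delay Γ, as defined in the text).

def tr_w_resp_cost(k, d, tr_plus, gamma):
    frames = (k + 1) + d * (k + 1)   # data transmissions + responses
    delay = k * tr_plus + gamma      # k timed-out rounds + final delivery
    return frames, delay

frames, delay = tr_w_resp_cost(k=2, d=4, tr_plus=30, gamma=5)
assert frames == 15 and delay == 65  # 3 data frames + 12 responses
```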
Worst-case values for these qualities of service occur when the sender fails. These scenarios are slightly more difficult to analyze, since they involve the execution of the monitor function. The results are shown in table 4 but, for the sake of brevity, the justification is omitted. Numerical results strongly depend on the actual architecture used to support the xAMp implementation.

<table>
<thead>
<tr><th>QOS</th><th>agreement</th><th>total order</th><th>causal</th><th>queue recd.</th></tr>
</thead>
<tbody>
<tr><td>bestEffort N</td><td>no (best effort N)</td><td>no</td><td>FIFO</td><td>no queue</td></tr>
<tr><td>bestEffort To</td><td>no (best effort To)</td><td>no</td><td>FIFO</td><td>no queue</td></tr>
<tr><td>atLeast N</td><td>no (assured N)</td><td>no</td><td>FIFO</td><td>no queue</td></tr>
<tr><td>atLeast To</td><td>no (assured To)</td><td>no</td><td>FIFO</td><td>no queue</td></tr>
<tr><td>reliable</td><td>all</td><td>no</td><td>FIFO</td><td>no queue</td></tr>
<tr><td>causal</td><td>all</td><td>no</td><td>yes</td><td>no queue</td></tr>
<tr><td>atomic</td><td>all or none</td><td>yes</td><td>yes</td><td>no</td></tr>
<tr><td>tight</td><td>all or none</td><td>yes</td><td>yes</td><td>yes</td></tr>
<tr><td>delta</td><td>all or none</td><td>timestamp</td><td>timestamp</td><td>timestamp</td></tr>
</tbody>
</table>

Table 3: The xAMp multi-primitive communication service.

We will focus on a port of xAMp that runs as a Unix¹¹ device driver. We have made our measurements on a version running on Sun SPARCstation 1 workstations. Figure 7 presents the results for the reliable QOS and fig. 8 for the atomic QOS, for different message sizes and numbers of stations. A more complete three-dimensional plot is given in fig. 9. In fig. 7 a single line is displayed for message indication since, as shown in table 4, the reliable QOS best-case execution time is independent of the number of recipients.
As the message is indicated as soon as it is received from the network, the execution time is close to \( \Gamma(sz) \) (plus some processing overhead). The confirmation is only provided when all responses are collected, approximately \( Tr(d) \) after the indication. Naturally, the atomic QOS is more expensive, since it requires all responses to be collected and the dissemination of a decision. While the confirmation is given to the user as soon as the decision is taken, the time being proportional to \( \Gamma(sz) + Tr(d) \), the indication is only provided when this decision is received, that is, roughly \( \Gamma \) later.

### Table 4: Cost of xAMp's QOS (in frames and time)

<table>
<thead>
<tr><th>QOS</th><th>\( T_e \) (best case)</th><th>\( T_e \) (worst case)</th></tr>
</thead>
<tbody>
<tr><td>bestEffort</td><td>\( \Gamma(sz) \)</td><td>\( kTr(d)+\Gamma(sz) \)</td></tr>
<tr><td>atomic</td><td>\( Tr(d)+\Gamma(sz) \)</td><td>\( kTr(d)+Tr(b)+Tr(1)+T_{wd}+Tr(1)+Tr(d)+T_{out}+\Gamma(sz) \)</td></tr>
<tr><td>tight</td><td>\( Tr(d)+\Gamma(sz) \)</td><td>\( kTr(d)+T_{wd}(d)+T_{out}+\Gamma(sz) \)</td></tr>
</tbody>
</table>

\( Tr(d) \): execution time of a round (send a message and collect \( d \) responses).
\( Tr(b) \): inconsistency time.
\( \Gamma(sz) \): abstract network transmit delay (a function of the message size \( sz \)).
\( T_{wd} \): wait-decision timeout.
\( T_{out} \): timeout to activate a new monitor upon active monitor failure.

6 Current state and conclusions

We have presented xAMp, a multi-primitive group communication service. The provision of different qualities of service gives the user the possibility of choosing the compromise between performance and reliability that best fits his/her requirements. The xAMp architecture exploits the fail-silence assumption and the properties of local area networks to provide services that are highly efficient on LANs. During the design of xAMp we traded portability over arbitrary networks for efficiency and timeliness in a local scope.
xAMp thus cannot be ported to interconnected networks. Although we are currently studying that problem, it is important to note that, as it is, xAMp is very suitable for dependable real-time applications, which are often based on LANs.

¹¹ UNIX is a Registered Trademark of AT&T.

Figure 7:
Figure 8:
Figure 9: 3d plots

xAMp is available as a software component consisting of a highly portable kernel and a set of interfaces to several environments and networks. Integration is a key feature of xAMp engineering. Most qualities of service are implemented as by-products of a basic core of the protocol, sharing data structures, procedures, failure-recovery algorithms and group monitor services. xAMp was easily ported to several LANs and environments, thanks to the decomposition between kernel and abstract network, and to a detailed and non-ambiguous specification of all xAMp interfaces. There are currently ports to the ISO 8802 token-ring and token-bus LANs, and an FDDI port is envisaged. The performance of xAMp over each of these LANs shows slight differences, depending on technology particularities. Real-time performance, network reliability and built-in fault tolerance, and sheer speed and throughput are examples of factors of choice among them. An experimental Ethernet port has also been made. Coverage of an Ethernet implementation may not be as high as over the other LANs mentioned, but it is perfectly acceptable for some non-real-time business and office segments.

The xAMp specification was verified and the implementation validated. The verification tool used was Xesar [15]. The basis for the verification was an Estelle/R formal specification of the original xAMp [2], forming the core of what xAMp is now. However, the verification covered most of the basic procedures, including the atomic monitor and the "tr-w-resp" procedure upon which most QOSs are based, thus increasing confidence in the protocol design.
The xAMp implementation was also subjected to a fault injection campaign, with the help of a specialized tool [1]¹².

Acknowledgments

Most of this work has been developed in the scope of the Delta-4 architecture. We wish to thank M. Baptista, A. Casimiro, M. Chereque, J. Etherington, H. Fonseca, B. Guilmore, R. Ribot, C. Rodriguez, J. Rufino and W. Vogels for their valuable contributions to the evolution and engineering of xAMp. The authors are indebted to several other people in the Delta-4 project for the many criticisms and suggestions made during the design and implementation of xAMp, particularly to P. Bond, D. Powell, J.L. Richier, D. Seaton and J. Voiron. We also thank Ken Birman, Larry Peterson and Rick Schlichting for fruitful exchanges of ideas.

References

¹² Both these works were performed collaboratively under the Delta-4 project framework, respectively by LGI-Grenoble and LAAS-Toulouse.
Data-Parallel Computing Meets STRIPS Erez Karpas, Tomer Sagi, Carmel Domshlak, Avigdor Gal, Avi Mendelson {karpase@tx, stomers7@tx, dcarmel@ie, avigal@ie, avi.mendelson@tce}.technion.ac.il Technion — Israel Institute of Technology Technion City, Haifa 32000, Israel Moshe Tennenholtz mosh@microsoft.com Microsoft Research & Technion Microsoft — Herzliya R&D Center 13 Shenkar St. Gav-Yam, Building No. 5 Herzliya 46275, Israel*

**Abstract** The increased demand for distributed computations on big data has led to solutions such as SCOPE, DryadLINQ, Pig, and Hive, which allow the user to specify queries in an SQL-like language, enriched with sets of user-defined operators. The lack of exact semantics for user-defined operators interferes with the query optimization process, thus putting the burden of suggesting, at least partial, query plans on the user. In an attempt to ease this burden, we propose a formal model that allows for data-parallel program synthesis (DPPS) in a semantically well-defined manner. We show that this model generalizes existing frameworks for data-parallel computation, while providing the flexibility of query plan generation that is currently absent from these frameworks. In particular, we show how existing, off-the-shelf, AI planning tools can be used for solving DPPS tasks.

**Motivation** In the classical approach to data processing, the user of a database management system (DBMS) specifies her data processing goal as a declarative query, and the DBMS automatically generates a query plan, i.e., a data processing program that computes the desired goal (Ullman 1988). The final query plan is typically constructed by first generating some query plan, and then optimizing it locally by applying a set of predefined query-plan rewriting rules (see e.g., Ambite and Knoblock 2001). This high-level workflow is based on the implicit assumption that query plans consist mostly of built-in operators (for example, relational algebra operators).
This assumption is needed because the system cannot proactively select user-defined operators for performing intermediate computations, and query plans cannot be optimized around these user-defined operators. With the rapid growth of both public and enterprise data repositories, the area of data processing faces conceptual changes in the way the data is perceived, analyzed, and stored. Classical DBMSs and their various extensions are still a cornerstone of data processing, yet large scale and operator-rich computations involving huge amounts of data are becoming more and more common. This, along with the physical limitations on the computing power and storage that a single machine can provide, has led to an increased interest in computation on highly distributed computing infrastructure, often referred to as “data-parallel computing”. While parallel and distributed database systems (Valduriez 1993; Özsu and Valduriez 2007) can address the scalability issues, they, like traditional DBMSs, lack flexibility in exploiting general user-defined operators. To address this need, several systems for data-parallel computing have been developed; the most well-known of these are probably Map/Reduce (Dean and Ghemawat 2008) and its open-source implementation Hadoop, and Dryad (Isard et al. 2007). These systems are based on low-level query plan programming, which is error-prone and inhibits mass adoption. However, they allowed for the development of higher-level systems, such as SCOPE (Chaiken et al. 2008), DryadLINQ (Isard and Yu 2009), Pig (Olston et al. 2008b), and Hive (Thusoo et al. 2009; 2010), which support combining programming with queries in SQL-like languages.
The high-level design of these systems is driven by two primary requirements: operability with arbitrarily rich sets of user-defined operators for performing intermediate steps of query plans, and optimization of query plans that takes into account the distributed nature of the underlying computing/storage infrastructure. Unfortunately, supporting unconstrained usage of user-defined operators comes at the expense of moving away from the declarative approach to data processing. First, if the user formulates a query which departs from the built-in operators, then she must provide the system with a base query plan. Second, to allow for further optimizations of the query plan, the user must explicitly instruct the system on what kind of optimizations can be performed around the non-standard operators. *The last four authors appear in alphabetical order. Copyright © 2013, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved. Both these requirements take the user away from the goal-driven, declarative data processing paradigm, and currently there is no agreement on what is the right balance between the expressivity of the system and the burden to be put on the user: some systems restrict user-defined operators to be of certain allowed types (Cohen 2006; Chaiken et al. 2008; Thusoo et al. 2010), and others do not optimize user-specified query plans (Olston et al. 2008b). The reason that general user-defined operators pose such a challenge is that the system is uncertain about “what they need” and/or “what their effects are”\footnote{We refer to this as “promiscuity” of user-defined operators.}. While the semantics of the built-in operators is “hard-coded” in the design of the data-parallel computing systems, user-defined operators typically come with no semantics attached to them.
Although some systems attempt to perform automated analysis of the user’s code (Isard and Yu 2009; Cafarella and Ré 2010), these analyses are (and must be) limited, and cannot be used to “reverse engineer” the semantics of a given user-defined function. For example, say an analyst wants two histograms of the users of a large social network, one by age and one by relationship status. The optimal query plan for that request will consist of a single table scan and two counts, but allowing the system to come up with such a plan is tricky. In an SQL-style query language, this request will have to be formulated as two queries, both expressed as aggregations over the users table, and planning for these queries in separation will result in a sub-optimal plan with two scans of the same table. However, suppose that somebody already implemented a “double aggregation” operator, which performs two aggregations over the same table, while only scanning it once. Although that operator can, in principle, be used for devising an optimal query plan, the system does not know what this operator can be used for and how, since this operator is user-defined. Hence, our analyst either settles for a sub-optimal query plan, or, if she is somehow aware of this “double aggregation” operator, then she can provide the system with a base query plan that uses this operator. While this is a simplistic example, it is meant for illustration purposes only. **Model** Our formalism of data-parallel program synthesis is tailored to tracking computation-specific data chunks. Each such data chunk represents some information which can be either generated or required as input by a computation primitive.
For example, a data chunk can represent all records of males between the ages of 18–49, or the average salary of all males between the ages of 18–49, etc. Note that the actual value of the average salary does not need to be known in advance; the fact that it is possible to compute this average given the input records suffices. Formally, a data-parallel program synthesis (DPPS) task consists of:

- $D$ — a set of possible data chunks. Each data chunk $d$ is associated with the amount $\sigma_d$ of memory it requires (given, for example, in MB). $D$ may be given explicitly or described implicitly, in which case it could even be infinite.
- $N$ — a finite set of computing units, or processors. Each processor $n$ is associated with the maximum total size $\kappa_n$ of the data chunks it can hold and access efficiently.
- $A$ — a set of possible computation primitives. Each such primitive is a triplet $a = (I, O, C)$, where $I \subseteq D$ is the required input, $O \subseteq D$ is the produced output, and $C : N \rightarrow \mathbb{R}^{0+}$ is a function describing the cost of the computation on each processor. Similarly to the possible data chunks, the set of computation primitives can also be given explicitly or implicitly, and in the latter case, the set $A$ could be infinite.
- $T : N \times D \times N \rightarrow \mathbb{R}^{0+}$ — the data transmission cost function. $T(n_1, d, n_2)$ is the cost of transmitting data chunk $d$ from processor $n_1$ to processor $n_2$.
- $s_0$ — the initial state of the computation.
- $G$ — the goal of the computation.

A state of the computation describes which chunks of data each processor currently holds. Formally, a state $s : N \rightarrow 2^D$ maps each processor to the set of data chunks it holds. For the sake of convenience, we define the free memory capacity of processor $n$ at state $s$ by $f(n, s) := \kappa_n - \sum_{d \in s(n)} \sigma_d$.
The goal is also a function $G : N \rightarrow 2^D$ that maps processors to data chunk sets, describing which chunks of data should be stored at each processor at the end of the computation. A state \( s \) satisfies the goal \( G \) iff each processor holds all the data chunks specified by the goal, that is, \( G(n) \subseteq s(n) \) for all \( n \in N \). The semantics of a DPPS task is as follows. A computation primitive \( a = (I, O, C) \) can run on processor \( n \in N \) at state \( s \) iff \( n \) holds all the input data chunks and has enough free memory to hold the output, that is, \( I \subseteq s(n) \) and \( \sum_{d \in O} \sigma_d \leq f(n, s) \). Performing \( a \) then generates the output data chunks at \( n \), while incurring a cost of \( C(n) \). That is, if \( s[a_n] \) is the state resulting from applying \( a \) at processor \( n \) in state \( s \), then \( s[a_n](n) = s(n) \cup O \), ceteris paribus. Considering the connectivity, a processor \( n_1 \) holding data chunk \( d \) can transmit it to processor \( n_2 \) iff the receiving processor \( n_2 \) has enough free capacity, that is, if \( \sigma_d \leq f(n_2, s) \). The transmission then incurs a cost of \( T(n_1, d, n_2) \). That is, if \( s[\text{tr}(n_1, d, n_2)] \) is the state resulting from this transmission, then \( s[\text{tr}(n_1, d, n_2)](n_2) = s(n_2) \cup \{d\} \), ceteris paribus. Finally, it is always possible for processor \( n \) to delete some data chunk \( d \). The respective action \( \text{del}(n, d) \) incurs no cost, and \( s[\text{del}(n, d)](n) = s(n) \setminus \{d\} \), ceteris paribus. Given a DPPS task \( (D, N, A, T, s_0, G) \), its set of operator instances is \( O := A \cup \{\text{tr}(n_1, d, n_2) \mid n_1, n_2 \in N, d \in D\} \cup \{\text{del}(n, d) \mid n \in N, d \in D\} \); note that \( |O| \) is polynomial in \( |D| \), \( |A| \), and \( |N| \).
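As a concrete reading of these definitions, the following Python sketch encodes a state as a map from processors to chunk sets and implements the applicability and transition rules above; the names (`sigma`, `kappa`, `run`, and the toy chunks and processors) are illustrative assumptions, not part of the formal model.

```python
# Illustrative sketch of DPPS states and transitions; sizes, capacities,
# and names are made up for the example.
sigma = {"T0": 4, "T1": 4, "H": 1}   # sigma_d: size of each data chunk
kappa = {"n1": 8, "n2": 8}           # kappa_n: capacity of each processor

def free(state, n):
    """f(n, s) = kappa_n minus the total size of the chunks held at n."""
    return kappa[n] - sum(sigma[d] for d in state[n])

def run(state, n, I, O):
    """Apply a computation primitive a = (I, O, C) on processor n."""
    assert I <= state[n] and sum(sigma[d] for d in O) <= free(state, n)
    s2 = {m: set(cs) for m, cs in state.items()}
    s2[n] |= O                       # outputs appear at n, ceteris paribus
    return s2

def transmit(state, n1, d, n2):
    """Ship chunk d from n1 to n2, provided n2 has room for it."""
    assert d in state[n1] and sigma[d] <= free(state, n2)
    s2 = {m: set(cs) for m, cs in state.items()}
    s2[n2].add(d)
    return s2

def delete(state, n, d):
    """del(n, d): drop chunk d from processor n, at no cost."""
    s2 = {m: set(cs) for m, cs in state.items()}
    s2[n].discard(d)
    return s2

s0 = {"n1": {"T0"}, "n2": {"T1"}}
s1 = run(s0, "n1", {"T0"}, {"H"})    # compute chunk H from fragment T0
s2 = transmit(s1, "n1", "H", "n2")   # now both processors hold H
```

A goal \( G \) is then just another map of the same shape, satisfied in a state \( s \) whenever \( G(n) \subseteq s(n) \) for every processor \( n \).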
A sequence of operator instances \( \tau = (a_1 \ldots a_m) \) is applicable if \( a_i \) is applicable in \( s_{i-1} \), where \( s_0 \) is the initial state, and \( s_i := s_{i-1}[a_i] \). The cost of \( \tau \) is the sum of its operator costs, and it is a solution to the task if \( s_m \) satisfies the goal, that is, if \( G(n) \subseteq s_m(n) \) for all \( n \in N \). Finally, \( \tau \) is optimal if its cost is minimal among all the solutions to the task. ### Expressivity and Complexity We now turn to consider the expressivity and computational complexity of DPPS. Despite the fact that the syntax of DPPS seems very different from SQL-style database query languages, we begin by showing that DPPS is strictly more expressive than relational algebra (Codd 1970) extended with aggregate functions (Klug 1982). Although there is a shift in data-parallel computing systems towards the NoSQL paradigm (Padhy, Patra, and Satapathy 2011), relational algebra is still a popular formalism for describing queries. Furthermore, modern data-parallel processing systems can impose an “ad-hoc” schema on non-relational data, thus making relational algebra still relevant to these systems. It is also worth noting at this point that DPPS is not limited to relational algebra, and can be used with other data models, that may be more suited to the NoSQL paradigm. Presenting a complete description of relational algebra here is infeasible, but we will briefly mention that a relational algebra query can be represented as a tree, whose leaves are either constants or relations (i.e., database tables), and whose inner nodes are projection (\( \pi \)), selection (\( \sigma \)), or aggregation (\( A \)) of a single child node, or cross product (\( \times \)), set union (\( \cup \)) or set difference (\( \setminus \)) of two child nodes. Similarly to the alias mechanism of SQL, we assume wlog that each occurrence of a relation name in that tree is assigned a distinct name.
**Theorem 1** Given an extended relational algebra query \( \Phi \), we can efficiently construct a solvable DPPS task \( \Pi \) such that any solution \( \pi \) for \( \Pi \) induces a valid query plan \( \varphi \) for \( \Phi \). The detailed proofs of the formal claims are relegated to a technical report. However, we attempt to provide the main ideas behind the proofs, especially if these carry some helpful insights. For the proof of Theorem 1, we construct a DPPS task \( \Pi \) with a single processor \( n \). The possible data chunks \( D \) of \( \Pi \) are all subexpressions of \( \Phi \), and they are constructed from the tree representation of \( \Phi \) by taking, for each node \( e \) in \( \Phi \), the subtree rooted at \( e \), that is, \( D := \{\text{subtree}(e) \mid e \in \Phi \} \). As the data chunks are represented in \( \Phi \) explicitly, their number is linear in the size of \( \Phi \). The computation primitives \( A \) of \( \Pi \) correspond to the internal nodes of \( \Phi \); for each internal node \( e \) we construct a primitive \( a_e \), whose inputs are the children of \( e \) and whose output is \( e \), with a cost that reflects the (typically estimated) execution cost of the computation. In the initial state of \( \Pi \), the single processor \( n \) contains the data chunks corresponding to all leaf nodes, and the goal is for \( n \) to hold the data chunk corresponding to \( \Phi \). Since there is only one processor in \( \Pi \), the transmission cost function \( T \) is irrelevant. It is easy to see that all solutions to \( \Pi \) contain the same computations, one for each internal node in \( \Phi \) (the only choice the solver makes is the order in which these computations are performed), and all these solutions of \( \Pi \) correspond to query plans for \( \Phi \). This proof of Theorem 1 is simple, but its construction restricts the scope of feasible query plans. 
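The single-processor construction in the proof of Theorem 1 can be sketched in a few lines of Python; the tuple encoding of the expression tree and the function names are our own assumptions, made for illustration only.

```python
# Sketch of the Theorem 1 construction: one data chunk per subexpression
# of Phi, one computation primitive per internal node. Internal nodes are
# tuples (op, child, ...); leaves are ("rel", name). Predicates and field
# lists are omitted to keep the sketch short.
phi = ("select", ("crossproduct", ("rel", "R"), ("rel", "S")))

def subtrees(node):
    """Enumerate subtree(e) for every node e of the expression tree."""
    yield node
    for child in node[1:]:
        if isinstance(child, tuple):
            yield from subtrees(child)

def compile_query(phi):
    chunks = list(subtrees(phi))                 # D: all subexpressions
    primitives = []                              # A: one per internal node
    for e in chunks:
        if e[0] != "rel":
            inputs = frozenset(c for c in e[1:] if isinstance(c, tuple))
            primitives.append((inputs, frozenset([e])))  # a_e = (I, O)
    s0 = {e for e in chunks if e[0] == "rel"}    # n initially holds leaves
    goal = {phi}                                 # n must end up holding Phi
    return chunks, primitives, s0, goal

chunks, prims, s0, goal = compile_query(phi)
```

Every solution must fire each primitive exactly once, so the only freedom left to the solver is the execution order, exactly as argued in the proof.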
However, there is a simple fix for that, which we describe here in informal terms. First, we add computation primitives corresponding to relational algebra equivalence rules, with a cost of 0. For example, the equivalence rule \( \sigma_q(\sigma_p(X)) = \sigma_p(\sigma_q(X)) \) induces the computation which takes as input a data chunk of the form \( \sigma_p(\sigma_q(X)) \), and produces as output the data chunk \( \sigma_q(\sigma_p(X)) \). Of course, this means we must extend the set of possible data chunks \( D \) to include all possible expressions derived in this manner. However, as noted earlier, we do not have to explicitly enumerate all possible data chunks, but can define them implicitly, by the base relations and the relational algebra operators. Additionally, we can add operators corresponding to “macros”, when these computations can be more efficiently executed. For example, joins can be expressed as selection over a cross-product of two relations, but are usually much more efficient to execute than first performing a cross-product, and then a selection over its output. These allow the solver to find a more efficient plan, while using the equivalence computations to prove that the result is equivalent to the original query. In essence, this finalizes an encoding of (non-distributed) query optimization as a DPPS task. However, our main motivation comes from the need to perform distributed computations. While the basic ideas described above still work, there are several more issues that need to be dealt with. First, a typical distributed database will fragment tables, and thus processors will no longer contain only base relations (and constants) in the initial state, but will usually contain some table fragments, which can be expressed as selection and/or projection of the base relations.
For example, assuming we have two processors which store some table \( T \) by hash-partitioning on field \( f \), in the initial state processor \( n_0 \) will hold \( \sigma_{\text{hash}(f)=0}(T) \) and processor \( n_1 \) will hold \( \sigma_{\text{hash}(f)=1}(T) \). We must also make sure to include equivalence rules which allow us to merge such data chunks, i.e., the equivalence rule \( \sigma_{\text{true}}(X) = X \), along with \( \sigma_{\text{hash}(f)=0}(X) \cup \sigma_{\text{hash}(f)=1}(X) = \sigma_{(\text{hash}(f)=0) \vee (\text{hash}(f)=1)}(X) \), and the fact that \( (\text{hash}(f)=1) \vee (\text{hash}(f)=0) \equiv \text{true} \), which is easily extended for any partition. Finally, we remark that a computing cluster, as well as the data stored and accessible on that cluster, are resources that are usually shared between multiple users, and thus must be used to satisfy all users’ needs. It has already been noted that the overall system performance could benefit from sharing computations between different queries. For example, the Comet system (He et al. 2010) constructs an execution plan for a series of DryadLINQ queries, and cross program optimization between Pig Latin programs is described by Olston et al. (2008a). With the DPPS formalism, multi-goal optimization is trivial, as the formalism already supports the specification of multiple goals. Given the expressivity of DPPS, it is not surprising that optimal data-parallel program synthesis is computationally hard. More importantly, it turns out that this problem remains NP-hard even under severe restrictions, because the computational hardness of DPPS stems from numerous sources. In fact, it turns out that even the satisficing variant of this problem is NP-hard. **Theorem 2** Satisficing data-parallel program synthesis is NP-hard, even when the possible data chunks are given explicitly.
The proof of Theorem 2 is by reduction from 3SAT; the induced DPPS tasks have processors with only a fixed memory capacity, and the solution must carefully manage the memory resource. The following two theorems show that optimal data-parallel program synthesis is NP-hard even under severe restrictions. **Theorem 3** Optimal data-parallel program synthesis with a single processor is NP-hard, even if the possible data chunks are given explicitly, and there are no memory constraints. **Theorem 4** Optimal data-parallel program synthesis with a single data chunk is NP-hard. The proof of Theorem 3 is by polynomial reduction from optimal delete-free STRIPS planning (Bylander 1994), while the proof of Theorem 4 is by polynomial reduction from the minimum Steiner tree in a graph problem (Karp 1972; Garey and Johnson 1979). These proofs capture two sources of complexity of DPPS: The complexity in Theorem 3 stems from the number of possible data chunks, while Theorem 4 captures the complexity which stems from the network structure. Although both optimal and satisficing DPPS are worst-case intractable in many cases, that does not mean that none of their practically relevant fragments is polynomial time. While much further investigation is needed here, Theorem 5 already captures one such fragment of tractability, which in particular contains our multiple histogram running example. **Theorem 5** Satisficing data-parallel program synthesis with no memory constraints can be solved in polynomial time, when the possible data chunks are given explicitly. The proof is by polynomial reduction to satisficing delete-free STRIPS planning, which is known to be polynomial-time solvable (Bylander 1994). **DPPS meets STRIPS** The worst case hardness of data-parallel program synthesis is clearly not something to be ignored, but obviously it does not imply that solving practical problems of interest is out of reach. 
The two options here would be either to develop a special-purpose solver for DPPS, or to compile DPPS tasks into a canonical combinatorial search problem, and use an off-the-shelf solver for the latter. The key advantage of the second option, which we consider here, is that using off-the-shelf solvers only requires modeling the problem of interest in the right formalism, without programming. Furthermore, these solvers tend to be better engineered and more robust than special-purpose solvers for niche problems. Since DPPS is all about synthesizing (possibly partially ordered) goal-achieving sequences of actions, a natural such target of compilation for DPPS would be this or another standard formalism for deterministic planning, with STRIPS probably being the most canonical such formalism (Fikes and Nilsson 1971). A STRIPS planning task with action costs is a 5-tuple \( \Pi = \langle P, O, C, s_0, G \rangle \), where \( P \) is a set of propositions, \( O \) is a set of actions, each of which is a triple \( o = \langle \text{pre}(o), \text{add}(o), \text{del}(o) \rangle \), \( C : O \rightarrow \mathbb{R}^{0+} \) is the action cost function, \( s_0 \subseteq P \) is the initial state, and \( G \subseteq P \) is the goal. An action \( o \) is applicable in state \( s \) if \( \text{pre}(o) \subseteq s \), and if applied in \( s \), results in the state \( s' = (s \setminus \text{del}(o)) \cup \text{add}(o) \). A sequence of actions \( \langle o_0, o_1, \ldots, o_n \rangle \) is applicable in state \( s_0 \) if \( o_0 \) is applicable in \( s_0 \) and results in state \( s_1 \), \( o_1 \) is applicable in \( s_1 \) and results in state \( s_2 \), and so on. The cost of an action sequence is the sum of the action costs, \( \sum_{i=0}^{n} C(o_i) \). The state resulting from applying action sequence \( \pi \) in state \( s \) is denoted by \( s[\pi] \).
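Under the definitions just given, STRIPS progression is a one-liner over sets; the sketch below (with a made-up toy action) is illustrative only.

```python
# Minimal STRIPS semantics: states are frozensets of propositions,
# actions are (pre, add, del) triples with a separate cost table.
def applicable(s, o):
    pre, add, dele = o
    return pre <= s                  # pre(o) ⊆ s

def progress(s, o):
    pre, add, dele = o
    assert pre <= s
    return (s - dele) | add          # s' = (s \ del(o)) ∪ add(o)

def run_plan(s0, plan, cost):
    """Apply an action sequence, accumulating its cost."""
    s, total = s0, 0
    for o in plan:
        assert applicable(s, o)
        s, total = progress(s, o), total + cost[o]
    return s, total

# Toy task: one action moving a marker from p to q, at cost 1.
move = (frozenset({"at-p"}), frozenset({"at-q"}), frozenset({"at-p"}))
s_final, c = run_plan(frozenset({"at-p"}), [move], {move: 1})
```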
An action sequence \( \langle o_0, o_1, \ldots, o_n \rangle \) is a plan for \( \Pi \) if \( G \subseteq s_0[\langle o_0, o_1, \ldots, o_n \rangle] \), and it is an optimal plan if no cheaper plan exists. We begin by describing a straightforward compilation of DPPS tasks with explicitly specified possible data chunks and no memory constraints. Given a DPPS task with processors \( N \), data chunks \( D \), computations \( A \), and without memory constraints, we show how to construct a STRIPS task \( \Pi = \langle P, O, C, s_0, G \rangle \), such that there is a cost-preserving one-to-one correspondence between solutions for \( \Pi \) and solutions to the DPPS task. The propositions are \( P = \{ \text{holds}(n, d) \mid n \in N, d \in D \} \), and they describe which processor currently holds which data chunk. The operators of \( \Pi \) are given by \( O = \{ \text{transmit}(n_1, d, n_2), \text{delete}(n_1, d), \text{compute}(n_1, a) \mid n_1, n_2 \in N, d \in D, a \in A \} \). The transmit operators are described by \( \text{transmit}(n_1, d, n_2) = \langle \{\text{holds}(n_1, d)\}, \{\text{holds}(n_2, d)\}, \emptyset \rangle \), with a cost of \( T(n_1, d, n_2) \). The delete operators are described by \( \text{delete}(n_1, d) = \langle \{\text{holds}(n_1, d)\}, \emptyset, \{\text{holds}(n_1, d)\} \rangle \), with a cost of 0. Finally, for \( a = (I, O, C) \), the compute operators are described by \( \text{compute}(n_1, a) = \langle \{\text{holds}(n_1, d) \mid d \in I\}, \{\text{holds}(n_1, d) \mid d \in O\}, \emptyset \rangle \), with a cost of \( C(n_1) \). It is not hard to verify that any solution for the planning task \( \Pi \) is also a solution to the given DPPS task, and that the cost of the solutions is the same.
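The compilation above can be sketched by grounding the \( \text{holds}(n, d) \) propositions and the three operator families; the concrete chunk, processor, and primitive names below are assumptions for the example, and costs are omitted for brevity.

```python
# Sketch of compiling a DPPS task (explicit chunks, no memory limits)
# into STRIPS operators given as (pre, add, del) triples.
from itertools import product

N = ["n1", "n2"]                                     # processors
D = ["T", "H"]                                       # data chunks
A = {"count": (frozenset({"T"}), frozenset({"H"}))}  # a = (I, O)

def holds(n, d):
    return f"holds({n},{d})"

def compile_to_strips(N, D, A):
    ops = {}
    for n1, d, n2 in product(N, D, N):
        if n1 != n2:                                 # transmit(n1, d, n2)
            ops[f"transmit({n1},{d},{n2})"] = (
                {holds(n1, d)}, {holds(n2, d)}, set())
    for n, d in product(N, D):                       # delete(n, d), cost 0
        ops[f"delete({n},{d})"] = ({holds(n, d)}, set(), {holds(n, d)})
    for n, (name, (I, O)) in product(N, A.items()):  # compute(n, a)
        ops[f"compute({n},{name})"] = (
            {holds(n, d) for d in I}, {holds(n, d) for d in O}, set())
    return ops

ops = compile_to_strips(N, D, A)
```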
Note that the only reason we require that the DPPS task does not have any memory constraints is that STRIPS does not support numerical variables, and so we cannot formulate the requirement that a processor has enough free memory to store any data chunks it computes or receives by transmission. However, numerical extensions to STRIPS do exist (Fox and Long 2003), and we can use such a numerical planning framework to also pose memory capacity constraints. The more substantial issue remains the restriction to explicitly specified possible data chunks, because typically we should expect the data chunks to be specified only implicitly. This issue has been addressed already in the planning literature, and in particular, in the context of the DPADL action language (Golden 2002). Here, however, we are interested in using off-the-shelf STRIPS planners, and fortunately, this issue can be overcome in STRIPS as well. Below we demonstrate this by describing the DPPS task that encodes relational algebra query optimization, which was described informally in the previous section. The propositions we use here encode a relational algebra expression tree. The objects, i.e., the parameters to predicates and operators, correspond to nodes in a relational algebra expression tree, as well as selection predicates, field lists, and aggregation functions. We do not give a full description here for the sake of brevity, but illustrate the main points. First, we have propositions that describe the structure of the relational algebra expression tree. Each node in the tree has an expression type, and each type of node has either one subexpression (select, project, and aggregation) or two subexpressions (cross-product, union, and difference). Additionally, select nodes also have a selection predicate, project nodes have a list of fields, and aggregation nodes have a list of fields and an aggregation function. 
For example, the proposition \( \text{select}(e, p, e_1) \) indicates that node \( e \) is the result of applying selection predicate \( p \) on node \( e_1 \), or in relational algebra notation \( e = \sigma_p(e_1) \). Another example is \( \text{crossproduct}(e, e_1, e_2) \), which indicates that \( e \) is the result of the cross-product of \( e_1 \) and \( e_2 \), or \( e = e_1 \times e_2 \). The full list of predicates, with their interpretation in relational algebra, is given in Table 1. A second type of proposition we use is \( \text{clear}(e) \), which indicates that \( e \) has not already been set, that is, that \( e \) is currently not part of any relational algebra expression. Additionally, the proposition \( \text{equiv}(e_1, e_2) \) indicates that the expressions represented by \( e_1 \) and \( e_2 \) are equivalent. Finally, as in the previous construction, we have the proposition \( \text{holds}(n, d) \) for every \( n \in N \) and every relational algebra expression node \( d \), which indicates that processor \( n \) holds the data chunk \( d \). <table> <thead> <tr> <th>Proposition</th> <th>RA Interpretation</th> </tr> </thead> <tbody> <tr> <td>\( \text{select}(e, p, e_1) \)</td> <td>\( e = \sigma_p(e_1) \)</td> </tr> <tr> <td>\( \text{project}(e, f, e_1) \)</td> <td>\( e = \pi_f(e_1) \)</td> </tr> <tr> <td>\( \text{aggregate}(e, f, a, e_1) \)</td> <td>\( e = A_{f,a}(e_1) \)</td> </tr> <tr> <td>\( \text{crossproduct}(e, e_1, e_2) \)</td> <td>\( e = e_1 \times e_2 \)</td> </tr> <tr> <td>\( \text{union}(e, e_1, e_2) \)</td> <td>\( e = e_1 \cup e_2 \)</td> </tr> <tr> <td>\( \text{diff}(e, e_1, e_2) \)</td> <td>\( e = e_1 \setminus e_2 \)</td> </tr> </tbody> </table> Table 1: STRIPS propositions and their interpretation in relational algebra. We also construct three types of operators. First, we have the transmission and deletion operators, which are part of any DPPS task, as described previously.
As before, the cost of the transmission operators stems from the DPPS task’s transmission cost function, and the cost of the delete operators is 0. Second, we have operators corresponding to actual relational algebra operations: selection, projection, aggregation, cross-product, union, and difference. The cost of these operators reflects the estimated execution cost. Each such operator is also parametrized by which processor performs the computation. For example, the operator \( \text{doselect}(n, e_1, p, e_2) \) requires \( \text{clear}(e_1) \) and \( \text{holds}(n, e_2) \), adds the propositions \( \text{select}(e_1, p, e_2) \) and \( \text{holds}(n, e_1) \), and deletes \( \text{clear}(e_1) \). As before, operators representing relational algebra “macros”, such as join, can also be encoded. For example, an operator joining relations \( e_1 \) and \( e_2 \) over predicate \( p \) creates two nodes: one representing \( e_1 \times e_2 \), and another representing \( \sigma_p(e_1 \times e_2) \). However, the cost of this operator is the cost of performing the join, rather than the cost of doing the full cross-product and selection. Finally, we have operators corresponding to equivalence rules. The cost of these operators is zero, because, as noted earlier, they are not actual computations, but are rather used by the solver to “prove” that its solution is correct. These operators are all encoded so that they construct a new relational algebra expression node — the “input” is never modified — and add the proposition that indicates that these expressions are equivalent.
For example, the commutativity of selection, that is, the equivalence rule \( \sigma_p(\sigma_q(X)) = \sigma_q(\sigma_p(X)) \), is encoded by a \( \text{commute-select} \) operator whose parameters name the nodes representing \( X \), \( \sigma_q(X) \), \( \sigma_p(\sigma_q(X)) \), \( \sigma_p(X) \), and \( \sigma_q(\sigma_p(X)) \). We can always encode these equivalence rules as operators with a fixed number of parameters, because the equivalence rules are local in nature, in the sense that they do not look deeper than one or two levels into the input relational algebra expression tree. One important point to note is that STRIPS does not support the creation of new objects. Therefore, the number of relational algebra expression nodes that are used in the solution must be set before planning begins. However, an algorithm which iteratively increases the number of nodes can be used to overcome this limitation. **Theorem 6** Let \( \Pi \) be a DPPS task, and \( k \in \mathbb{N} \). There exists a STRIPS task \( \Pi' \) that can be constructed from \( \Pi \) in time polynomial in \( |\Pi| \) and \( k \), such that, if \( \Pi \) has a solution consisting of at most \( k \) operators, then - (i) \( \Pi' \) is solvable, and - (ii) every plan \( \pi' \) for \( \Pi' \) induces a solution \( \pi \) for \( \Pi \) of the same cost. In order to provide some empirical evidence that our approach is practical, we have encoded a DPPS task which describes our running example of multiple histogram computations over a single table, as a planning task. We assume the table is partitioned by its primary key across a cluster with \( n \) processors, and that the user wants a histogram of the table by \( f \) different fields. The computational operators here are \( \text{count}(d_i) \), which generates \( f \) partial histograms, one by each field, from table fragment \( d_i \), and \( \text{merge}(h_i) \), which requires all \( n \) partial histograms according to field \( i \), and merges them into a histogram of the full table by field \( i \).
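This planning task can be generated programmatically; the sketch below emits the count and merge operators as (pre, add, del) triples, with proposition names that are our own invention rather than the paper's encoding.

```python
# Generate the multiple-histogram task: n table fragments, f fields.
def histogram_task(n, f):
    ops = {}
    for i in range(n):
        # count(d_i): fragment i yields f partial histograms, one per field
        ops[f"count(d{i})"] = (
            {f"frag{i}"}, {f"part{i}_{j}" for j in range(f)}, set())
    for j in range(f):
        # merge(h_j): needs all n partial histograms by field j
        ops[f"merge(h{j})"] = (
            {f"part{i}_{j}" for i in range(n)}, {f"hist{j}"}, set())
    return ops

ops = histogram_task(4, 2)       # 4 fragments, 2 requested histograms
```

Since no memory constraints are modeled, every delete list is empty, which is exactly why this domain is delete-free.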
We did not model memory constraints, and so this domain is delete-free. We varied \( n \) from 2 to 64 and \( f \) from 2 to 6.

Figure 1: Runtime for obtaining \( f \) histograms over different fields from the same table, fragmented across \( n \) processors.

We solved these planning tasks with the Fast Downward planner (Helmert 2006) using greedy best-first search with the FF heuristic (Hoffmann and Nebel 2001). Figure 1 shows the total planning time for these different values, with a separate line for each value of \( f \). The hardest planning problem, of computing a histogram by 6 different fields across a cluster with 64 processors, was still solved in under a minute. Note that we did not implement the execution of the operators, and so we cannot compare to actual distributed DBMS solutions, for which we could only measure query execution time. Finally, we remark that although the planner we used does not guarantee optimal solutions, in this case all the solutions that were obtained were optimal, and scanned the table only once.

**Summary**

We have described a formal model of data-parallel program synthesis, DPPS, which generalizes the specific data processing systems developed in the area of data-parallel computing. The key advantage of working at the level of the DPPS model is that it is easy to separate the modeling of the domain (i.e., the data chunks and computations) from the specific task (i.e., the network topology, current state of the system, and current queries of the users). The domain modeling could be done by the software development team, by symbolically annotating user-defined operators. The specific task is generated when a query is given to the system by a user, who need not be aware of implementation techniques or network topology. This allows the software development team to focus on optimizing individual low-level functions, while the system automatically generates an optimized query plan.
DPPS is more expressive than relational algebra with aggregate functions, allowing both for taking into account the distributed nature of data-parallel computing and for incorporating arbitrary user-defined operators of the form supported by current data-parallel computing systems. The expressivity of DPPS makes reasoning in it computationally hard in the worst case, and we discussed various sources of this time complexity. Beyond the worst-case complexity analysis, we showed how DPPS tasks can be compiled into STRIPS, a standard formalism for deterministic action planning. Using a canonical histogram computation example, we demonstrated how one can use off-the-shelf STRIPS planning tools to solve such DPPS tasks.

In terms of future work, examining the relationship between traditional measures for query complexity from the DB community and the complexity of the corresponding planning task could lead to cross-fertilization between the fields. Additionally, studying the relationship between tractable fragments of DPPS and tractable fragments of planning could also lead to some interesting results.

**Acknowledgements**

This work was carried out in and supported by the Technion-Microsoft Electronic-Commerce Research Center. We thank the anonymous reviewers for their helpful comments.
{"Source-Url": "https://www.aaai.org/ocs/index.php/AAAI/AAAI13/paper/viewFile/6289/7192", "len_cl100k_base": 8545, "olmocr-version": "0.1.51", "pdf-total-pages": 7, "total-fallback-pages": 0, "total-input-tokens": 28905, "total-output-tokens": 10570, "length": "2e13", "weborganizer": {"__label__adult": 0.0003261566162109375, "__label__art_design": 0.0003914833068847656, "__label__crime_law": 0.0004210472106933594, "__label__education_jobs": 0.0011959075927734375, "__label__entertainment": 0.0001036524772644043, "__label__fashion_beauty": 0.0001908540725708008, "__label__finance_business": 0.0005526542663574219, "__label__food_dining": 0.0004563331604003906, "__label__games": 0.0005640983581542969, "__label__hardware": 0.0014696121215820312, "__label__health": 0.0007414817810058594, "__label__history": 0.0004012584686279297, "__label__home_hobbies": 0.0001589059829711914, "__label__industrial": 0.0008006095886230469, "__label__literature": 0.0003962516784667969, "__label__politics": 0.0003767013549804687, "__label__religion": 0.0005645751953125, "__label__science_tech": 0.256103515625, "__label__social_life": 0.0001156330108642578, "__label__software": 0.0175018310546875, "__label__software_dev": 0.7158203125, "__label__sports_fitness": 0.0002753734588623047, "__label__transportation": 0.0008025169372558594, "__label__travel": 0.00025463104248046875}, "weborganizer_max": "__label__software_dev", "avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_v1__avg_fraction_numbers_in_line_ratio": [[0, 38925, 0.02968]], "fineweb_edu_fasttext_gt2__fineweb_edu_fasttext_gt2__score": [[0, 38925, 0.48272]], "ft_lang_id_en_doc_v2__ft_lang_id_en_doc_v2__en": [[0, 38925, 0.8836]], "google_gemma-3-12b-it_contains_pii": [[0, 4644, false], [4644, 9622, null], [9622, 16920, null], [16920, 23466, null], [23466, 30326, null], [30326, 34980, null], [34980, 38925, null]], "google_gemma-3-12b-it_is_public_document": [[0, 4644, true], [4644, 9622, null], [9622, 16920, null], [16920, 
23466, null], [23466, 30326, null], [30326, 34980, null], [34980, 38925, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 38925, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 38925, null]], "pdf_page_numbers": [[0, 4644, 1], [4644, 9622, 2], [9622, 16920, 3], [16920, 23466, 4], [23466, 30326, 5], [30326, 34980, 6], [34980, 38925, 7]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 38925, 0.06557]]}
olmocr_science_pdfs
2024-12-03
2024-12-03
2332164c40ebf146439ecec27cbf25ccea5ed848
Modular Coordination of Multiple Autonomic Managers

Gwenaël Delaval, Soguy Mak-Karé Gueye, Eric Rutten, Noël De Palma

HAL Id: hal-01006106
https://hal.archives-ouvertes.fr/hal-01006106
Submitted on 13 Jun 2014

HAL is a multi-disciplinary open access archive for the deposit and dissemination of scientific research documents, whether they are published or not. The documents may come from teaching and research institutions in France or abroad, or from public or private research centers.

Modular Coordination of Multiple Autonomic Managers

Gwenaël Delaval, Univ. Grenoble Alpes, LIG, Grenoble, France, gwenael.delaval@inria.fr
Soguy Mak-Karé Gueye, Univ. Grenoble Alpes, LIG, Grenoble, France, soguy-mak-kare.gueye@inria.fr
Eric Rutten, INRIA, Grenoble, France, eric.rutten@inria.fr
Noël De Palma, Univ. Grenoble Alpes, LIG, Grenoble, France, noel.depalma@imag.fr

ABSTRACT

Complex computing systems are increasingly self-adaptive, with an autonomic computing approach for their administration. Real systems require the co-existence of multiple autonomic management loops, each complex to design. However, their uncoordinated co-existence leads to performance degradation and possibly to inconsistency. There is a need for methodological support facilitating the coordination of multiple autonomic managers. In this paper we propose a method focusing on the discrete control of the interactions of managers. We follow a component-based approach and explore modular discrete control, allowing to break down the combinatorial complexity inherent to the state-space exploration technique. This improves scalability of the approach and allows constructing a hierarchical control.
It also allows re-using complex managers in different contexts without modifying their control specifications. We build a component-based coordination of managers, with introspection, adaptivity and reconfiguration. We validate our method on a multiple-loop multi-tier system.

Keywords: Autonomic computing; Component dynamic adaptation; Automated management; Control loops; Formal methods; Self-adaptive systems; Software reuse

*This research is partly supported by the FSN Datalyse project and ANR INFRA (ANR-11-INFRA 012 11) under a grant for the project Ctrl-Green.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from Permissions@acm.org. Copyright 2014 ACM 978-1-4503-2577-6/14/06 $15.00. http://dx.doi.org/10.1145/2602458.2602463

1. INTRODUCTION

1.1 Context

Complex computing systems are increasingly designed to be self-adaptive, and therefore adopt the autonomic computing approach for the management of their administration [15]. Computing infrastructures are equipped with Autonomic Managers (AM), where monitors or sensors gather relevant information on the state and events of the Managed Elements (ME). Execution of administration actions, offered by the system API, implements regulation of the ME’s activities. In between, the loop is closed by a decision component. An AM is a component that continuously reacts to flows of input information by producing flows of output actions; it can therefore be considered as a reactive system [11].
Self-management issues include self-configuration, self-optimization, self-healing (fault tolerance and repair), and self-protection. Typical examples are found in data-centers, with managers for resources, dependability, and energetic efficiency, as we consider in the Ctrl-Green project. Usually, the automation of such administration issues is approached by building efficient and robust AMs, such as self-sizing, self-repair [21], robust reconfiguration [4] or consolidation [16].

1.2 Coordination of managers

Real systems have multiple dimensions to be managed. They do require the co-existence of multiple autonomic managers. However, their uncoordinated execution can lead to interferences that could cause performance degradation or even inconsistency [1]. This is still an open problem in autonomic computing [17]. One solution consists in re-designing a new global loop taking into account combined effects, but this is even more complex than for individual loops, and is contrary to the benefits of modularity and re-usability of the AMs. Therefore, there is a deep need for methodological support facilitating the coordination of multiple autonomic managers. Many approaches have been proposed for coordinating managers. For instance, in the presence of quantitative metrics, like energy and performance, it is possible to define composition functions [6], for example involving notions of utility. Here, we consider the case of event-based coordination focusing on qualitative aspects. The coordination strategy can be ensured by an upper-level AM. This latter AM, above the individual AMs considered as MEs themselves, constitutes a coordination controller.
(Ctrl-Green project: http://www.en.ctrlgreen.org/)

```
node delayable(r,c,e:bool) returns (a,s:bool)
let
  automaton
    state Idle
      do a = false ; s = r and c
      until r and c then Active
          | r and not c then Wait
    state Wait
      do a = false ; s = c
      until c then Active
    state Active
      do a = true ; s = false
      until e then Idle
  end
tel

node twotasks(r1,e1,r2,e2:bool) returns (a1,s1,a2,s2:bool)
  enforce not (a1 and a2)
  with c1,c2:bool
let
  (a1,s1) = delayable(r1,c1,e1);
  (a2,s2) = delayable(r2,c2,e2)
tel
```

Figure 1: Heptagon/BZR example: delayable task control. (a) graphical / (b) textual / (c) exclusion contract.

Some component-based frameworks, e.g., Fractal [5], provide a structural hierarchical framework, associating a control behavior locally to a component, where the problem of coordination can be addressed. AM components are equipped with notions of observability and controllability. However, hand-made methodologies remain complex and error-prone, and hard to re-use. The difficulty in designing coordinators lies in the combinatorial complexity of the cases of interference, for which there is a need for support models and tools. Another way to look at it is to consider action invocations and their firing conditions as events, and to enforce a control logic that prevents malfunction, based on the states of the AMs. Coordinating AMs can then be seen as the problem of synchronization and logical control of administration operations which can be applied by AMs on the MEs in response to observed events. The combinatorial complexity of formal techniques at compilation time calls for methods explicitly considering scalability issues, e.g., through modularity.

1.3 Our approach and contribution

In previous work [7, 9], we have defined the notion of controllable autonomous manager components, and proposed the reactive control of their coordinated assemblies in a hierarchical, systematic structure. Our approach involves formal models and techniques originally designed for reactive embedded systems.
We adopt the so-called “synchronous” languages, which are especially well-fitted for the specification, validation and implementation of reactive kernels [11], which makes them relevant for the problem domain of autonomic loops. Additionally, we benefit from the Discrete Controller Synthesis (DCS) technique, stemming from Control Theory [20, 19]: it enforces coordination logic between concurrent activities, in terms of events and states, with automated algorithms used off-line, at compilation time. However, in [9], our hierarchical proposal remained monolithic. In this paper, our contribution is to leverage the approach with a method stressing modularity, with benefits for the design of multiple-loop managers: (i) re-use of complex managers and their control specifications, without modification, in different contexts; (ii) scalability for the state-space-based control technique, by breaking down its combinatorial complexity. Another contribution is the validation of our method on the coordination of a multi-loop autonomic multi-tier system, supporting multiple applications. The principle of our approach is to identify design constraints on AMs, namely observability and controllability, and to construct a component-based structure where they are explicit, in a way that does not involve modifying the AMs. Section 2 introduces background; Section 3 defines the modular specification and formalization of behaviors and coordination control objectives; Section 4 validates our approach on a class of multi-tier autonomic systems; Section 5 discusses related work; Section 6 concludes and draws perspectives.

2. BACKGROUND: REACTIVE CONTROL

2.1 Reactive languages and Mode Automata

Reactive systems are characterized by their continuous interaction with their environment, reacting to flows of inputs by producing flows of outputs. They are classically modeled as transition systems or automata, with languages like StateCharts [13].
We adopt the approach of synchronous languages [14], because we then have access to the control tools used further on. The synchronous paradigm refers to the automata parallel composition that we use in these languages, allowing for clear formal semantics, while supporting the modeling of asynchronous computations [12]: actions can be asynchronously started, and their completion is waited for, without blocking other activity continuing in parallel. The Heptagon/BZR language [8] supports programming of mixed synchronous data-flow equations and automata, called Mode Automata, with parallel and hierarchical composition. The basic behavior is that at each reaction step, values in the input flows are used, as well as local and memory values, in order to compute the next state and the values of the output flows for that step. Inside the nodes, this is expressed as a set of equations defining, for each output and local flow, the value of the flow, in terms of an expression on other flows, possibly using local flows and state values from past steps.

Figure 1(a,b) shows a small Heptagon/BZR program. The node delayable programs the control of a task, which can either be idle, waiting or active. When it is in the initial Idle state, the occurrence of the true value on input r requests the starting of the task. Another input c can either allow the activation, or temporarily block the request and make the automaton go to a waiting state. Input e notifies termination. The outputs represent, respectively, a: activity of the task, and s: triggering the concrete task start in the system’s API. Such automata and data-flow reactive nodes can be re-used by instantiation, and composed in parallel (noted “|”) and in a hierarchical way, as illustrated in the body of the node in Figure 1(c), with two instances of the delayable node. They run in parallel, in a synchronous way: one global step corresponds to one local step for every node.
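As a concrete reading of the delayable node's behavior, here is a hand-written Python transcription of its step function (our own simulation, not generated code): outputs are computed in the current state, and the transition takes effect at the next step.

```python
# Hand-written Python transcription of the delayable Mode Automaton:
# state in {Idle, Wait, Active}, inputs (r, c, e), outputs (a, s).
def delayable_step(state, r, c, e):
    if state == "Idle":
        a, s = False, r and c
        if r and c:
            state = "Active"
        elif r and not c:
            state = "Wait"
    elif state == "Wait":
        a, s = False, c
        if c:
            state = "Active"
    else:  # Active
        a, s = True, False
        if e:
            state = "Idle"
    return state, a, s

st = "Idle"
st, a, s = delayable_step(st, r=True, c=False, e=False)  # request, blocked
st, a, s = delayable_step(st, r=False, c=True, e=False)  # now allowed: start
print(st, a, s)  # Active False True
```

The second step shows the role of c: the pending request waits in Wait until c grants it, at which point s triggers the concrete start and the automaton moves to Active.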
The compiler produces executable code in target languages such as C or Java, in the form of an initialisation function reset, and a step function implementing the transition function of the resulting automaton. It takes incoming values of input flows gathered in the environment, computes the next state on internal variables, and returns values for the output flows. This function is called at relevant instants from the infrastructure where the controller is used.

### 2.2 Discrete control and Heptagon/BZR

Using a reactive language gives all the support of the classical formal framework of Labelled Transition Systems (LTS), not formally described here due to space limitations. In this work, we focus on software engineering and methodology; formal techniques are not in the scope of this paper [8]. In particular, we benefit from state-space exploration techniques, like model checking or, more originally, Discrete Controller Synthesis (DCS). Initially defined in the framework of language theory [20], DCS has been adapted to symbolic LTS and implemented in tools within the synchronous technology [19]. It is applied on an FSM representing the possible behaviors of a system, its variables being partitioned into controllable and uncontrollable ones. For a given control objective (e.g., staying invariantly inside a given subset of states, considered “good”), the DCS algorithm automatically computes, by exploration of the state space, the constraint on controllable variables, depending on the current state, for any value of the uncontrollables, so that the remaining behaviors satisfy the objective. This constraint inhibits as few behaviors as possible; it is therefore called maximally permissive. The algorithms are related to model-checking techniques for state-space exploration. If no solution is found, because the problem is over-constrained, then DCS plays the role of a verification. The Heptagon/BZR language\(^7\) includes a behavioral contract syntax [8].
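What DCS computes can be illustrated by brute force on the small delayable-task example: for a given configuration and given uncontrollable inputs, enumerate the values of the controllables that keep a mutual-exclusion invariant. This hand-written sketch is only an illustration; the actual synthesis works symbolically over the whole state space, off-line.

```python
from itertools import product

# Next-state function of the delayable automaton (hand-transcribed).
def delayable_next(state, r, c, e):
    if state == "Idle":
        if r and c:
            return "Active"
        if r and not c:
            return "Wait"
        return "Idle"
    if state == "Wait":
        return "Active" if c else "Wait"
    return "Idle" if e else "Active"  # state == "Active"

def safe_controls(states, r1, e1, r2, e2):
    """(c1, c2) values whose successor avoids both tasks being Active,
    i.e., the choices a maximally permissive controller may allow."""
    ok = []
    for c1, c2 in product([True, False], repeat=2):
        n1 = delayable_next(states[0], r1, c1, e1)
        n2 = delayable_next(states[1], r2, c2, e2)
        if not (n1 == "Active" and n2 == "Active"):
            ok.append((c1, c2))
    return ok

# Both tasks requested simultaneously from Idle: the controller must
# not grant both; it may grant either one, or delay both.
allowed = safe_controls(("Idle", "Idle"), True, False, True, False)
print(allowed)
```

Only the pair (True, True) is excluded here, which matches the maximal-permissiveness idea: inhibit no more choices than the invariant requires.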
It allows for the declaration, using the with statement, of controllable variables, the value of which is not defined by the programmer. These free variables can be used in the program to describe choices between several transitions. They are defined, in the final executable program, by the controller computed off-line by DCS, according to the expression given in the enforce statement. Knowledge about the environment, such as, for instance, event occurrence order, can be declared in an assume statement. This is taken into account during the computation of the controller with DCS. Heptagon/BZR compilation invokes a DCS tool, and inserts the synthesized controller in the generated executable code, which has the same structure as above: reset and step functions.

Figure 1(c) shows an example of a contract coordinating two instances of the delayable node of Figure 1(a). The twotasks node has a with part declaring controllable variables c1 and c2, and the enforce part asserts the property to be enforced by DCS. Here, we want to ensure that the two tasks running in parallel will not both be active at the same time: not \((a_1 \text{ and } a_2)\). Thus, c1 and c2 will be used by the synthesized controller to delay some requests, leading the automata of the tasks to the waiting state whenever the other task is active. The constraint produced by DCS can have several solutions: the Heptagon/BZR compiler generates deterministic executable code by favoring, for each controllable, the value true over false, in the order of declaration.

### 3. MODULAR COORDINATION

In this section, we first introduce the basic elements of modeling for coordination by discrete control in Section 3.1; the notions explored in previous work [9] are redefined in a new way so that, given the need for modularity detailed in Section 3.2, they can be re-used to build up the method for modular coordination in Section 3.3.
#### 3.1 Basic AMs coordination

##### 3.1.1 Behavior of managers

We model an autonomic manager as a reactive data-flow component. As shown in Figure 2(a), it receives a flow m of monitor inputs that it analyses in a decision process based on a representation of the managed system status. It appropriately emits a flow of actions according to a management policy or strategy AM. Figure 2(b) shows a simple example of a manager behavior’s model. It has two execution states, represented by S1, which is the initial state, and S2. In S1, when it receives the input msa, it emits sa and stays in S1. We distinguish between such simple, short actions (instantaneous in the particular sense that they are completed within the execution of a step of the automaton) and long actions (asynchronous), as can be done classically with synchronous models [3]. Thus, when the automaton receives ml, it emits la and goes to S2, representing the processing of the action la. It returns to S1 at the reception of nl, notifying the completion of the asynchronous execution of la. In general, the FSMs distinguish states useful for the coordination, as illustrated by concrete cases further in Section 4.

##### 3.1.2 Controllability of managers

The controllability of the managers is considered here only at large grain, and consists in allowing or inhibiting the triggering of the management processes inherent to their management decisions. In the models, the control is represented by control variables; however, its real implementation can be done in several different ways: for example, the manager can be suspended and re-activated, or it can have an event interception mechanism. As shown in Figure 2(c), we exhibit the controllability of the manager by adding control variables c that allow inhibiting the actions a the manager can trigger. Without loss of generality, we consider one Boolean input for each action which we want to be controllable.

\(^7\) http://bzh.inria.fr
If some actions are not controllable due to their nature (e.g., urgent recovery), or if the coordination problem does not require all of them to be controllable, then only the relevant ones can be associated with such a control condition. In general, additional outputs are also needed to exhibit an internal state $s$ of $AM$, necessary for the outside controller to compute $c$, e.g., in the case of a long action, informing that a notification $nl$ has not arrived yet. Figure 2(d) shows how we integrate control in the previous model. We add in $ctrl-mgr$ Boolean inputs $cl, cs$ for each corresponding action, to condition the firing of transitions, in conjunction with the monitoring, hence giving the possibility of inhibiting actions. Output $s$ exhibits the current state of the manager, typically the fact that a long action is being executed. Note that a long action, once started, cannot be prevented or interrupted here.

### 3.1.3 Coordination of managers by control

The coordination of several managers is defined by the composition of the models exhibiting the controllability of their behaviors, to which we associate a behavioral contract specifying the coordination policy. Figure 3(a) shows a composite node $coord-mgrs$; its body corresponds to the parallel composition of the control models $ctrl-mgr_i$ of the managers to coordinate. The associated contract for their coordination consists of three statements. The $with$ statement is where their control variables $c_i$ are declared to be local to $coord-mgrs$, and to be controllable in terms of DCS, as introduced in Section 2.2. The $enforce$ statement gives the control objective in the form of a Boolean expression $Obj$ on variables from the nodes’ inputs or internal states $s_i$. The $assume$ statement is where knowledge about the environment is declared. For simplicity, we do not use it.
For example, a coordination objective between two manager components, $AM_1$ and $AM_2$, can be to prevent $AM_2$ from triggering an action $a_2$ when $AM_1$ is in some state given by $s_1$. This is encoded by the following expression, to be made invariant by control: not $(s_1 \text{ and } a_2)$. The generated controller enforcing the coordination policy, as in Figure 3(b), is in charge of producing appropriate values for the $c_i$ control inputs to the managers. The coordination logic acts as an additional component. It enforces the policy defined in the contract for managing the interactions between the $AM_i$, based on their inputs $m_i$, $n_i$ and states $s_i$. At this level, the DCS problem formally encoding the coordination problem can be solved using monolithic DCS. In the case of a hierarchical structure, the main Heptagon/BZR node is constructed with the contract enforcing the conjunction of all the local objectives, declaring the union of all local controllable variables, and with a body composing all the manager control automata in parallel. Hence the control is centralized, since only one controller is in charge of enforcing the overall objectives, if the synthesis succeeds.

### 3.2 The need and means for modularity

#### 3.2.1 Limitations and need for modularity

Advantages of our DCS-based approach are manifold: (i) high-level language support for controller design (tedious and error-prone to code manually at a lower level); (ii) automated formal synthesis of controllers, correct by design (hard to guarantee manually); (iii) maximal permissiveness of controllers: they are minimally constraining, and in that sense optimal (even more difficult to obtain manually). However, until now, the approach had not been leveraged to hierarchical modularity, and remained monolithic. This produces a unique controller enforcing the overall control objectives.
However, when considering a large number of managers, this monolithic approach might not succeed, because exploring the large state space would be very time-consuming. This can take several days, and can fail due to computing resource limits. This limits the scalability of the approach. Furthermore, a modification, even a partial one, leads to a recomputation of the overall coordinated composition, invalidating previously generated code, which limits the re-usability of management components. To address this issue, we want to exploit modular DCS, where the control objectives can be decomposed into several parts, each part managed by a controller. Each controller manages a limited number of components. This decreases the state space to explore for the synthesis of each controller. The recomputation of a controller that has no impact on other controllers does not require the recomputation of the latter. This makes possible the re-use of the controllers' generated code. Not only are autonomic managers available for re-use, but coordinated assemblies of managers can also be made available for further re-use. In the following sections we detail how modular DCS is used to obtain this scalability and re-usability of management components.

#### 3.2.2 Modular contracts in Heptagon/BZR

Modular DCS consists in taking advantage of the modular structure of the system to control locally some subparts of this system [19]. The benefit of this technique is, firstly, to allow computing the controller only once for specific components, independently of the context where the component is used, hence being able to re-use the computed controller in other contexts. Secondly, as DCS itself is performed on a subpart of the system, the model from which the controller is synthesized can be much smaller than the global model of the system. Therefore, as DCS is of exponential complexity in practice, the gain in synthesis time can be high, and it can be applied to larger and more complex systems.
Figure 3: Single-level coordination of managers.

Figure 4: Modular contracts in Heptagon/BZR.

Heptagon/BZR benefits from the modular compilation of the nodes: each node is compiled towards one sequential function, regardless of its calling context, the nodes called inside being abstracted. Thus, modular DCS is performed by using the contracts as abstractions of the sub-nodes. One controller is synthesized for each node supplied with local controllable variables. The contracts of the sub-nodes are used as an environment model, as an abstraction of the contents of these nodes, to synthesize the local controller. As shown in Figure 4, the objective is to control the body and coordinate the sub-nodes, using controllable variables $c_1, \ldots, c_n$, given as inputs to the sub-nodes, so that $G$ is true, assuming that $A$ is true. Here, we have information on the sub-nodes, so that we can assume not only $A$, but also that the $n$ sub-nodes each enforce their contract: $\bigwedge_{i=1}^n (A_i \implies G_i)$. Accordingly, the problem becomes: assuming the above, we want to enforce $G$ as well as $\bigwedge_{i=1}^n A_i$. Control at the composite level takes care of enforcing the assumptions of the sub-nodes. This synthesis considers the outputs of the local abstracted nodes as uncontrollable variables, constrained by the nodes’ contracts. A formal description, out of our scope here, is available [8].

### 3.3 Modular coordination principle

With modularity, we can decompose the coordination policy into several parts, structured in a hierarchical way. This requires making coordinated assemblies themselves controllable. In contrast to monolithic DCS, modular DCS allows constructing local controllers so that they can be re-used in an assembly composite to form a global control. These local controllers can also be the composition of sub-controllers themselves. The control is decentralized in the sense that each part of the assembly handles part of the control.
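The proof obligation of a modular contract (assume A and the sub-contracts A_i implies G_i; enforce G and the A_i through the controllables) can be stated as a small enumeration. All the formulas in this toy sketch are illustrative placeholders, not taken from the paper:

```python
# Toy version of the modular DCS obligation from the text:
# assuming A and the sub-contract (A1 => G1), the upper controller
# must be able to choose the controllable c so that G and A1 hold,
# for every value of the uncontrollable input u.
# All formulas below are illustrative placeholders.

A  = lambda u, c: True           # top-level assumption
A1 = lambda u, c: c              # sub-node assumption: requires c
G1 = lambda u, c: (not u) or c   # guarantee the sub-node provides
G  = lambda u, c: G1(u, c)       # top-level goal, relying on G1

def obligation_met():
    for u in (True, False):  # every uncontrollable valuation
        # does some controllable value make G and A1 hold,
        # under A and the sub-contract A1 => G1?
        if not any(A(u, c) and ((not A1(u, c)) or G1(u, c))
                   and G(u, c) and A1(u, c)
                   for c in (True, False)):
            return False
    return True

print(obligation_met())  # a winning choice exists: c = True
```

This mirrors the structure of the synthesis problem: the sub-node guarantee G1 is usable only if the upper level discharges the sub-node assumption A1, which it does here by fixing c.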
The first step to achieve a modular control is to make a coordinated assembly composite controllable. This can be seen as making the latter expose its controllability (like the AMs before), in order to allow enforcing a further, additional coordination policy for a global management.

#### 3.3.1 Controllable coordinated managers

In order to be controllable, a coordinated composite must enforce local objectives defining the local control strategy, as well as outside objectives. The enforcement of the outside objectives is required to allow the re-use of the controller in different contexts, in which additional control strategies have to be enforced beside the predefined local one. Hence the outside objectives describe the guarantee of a control strategy received from elsewhere. This must be explicitly part of the contract of the controller. Starting from a coordinated composite as before, making the latter controllable is achieved by first equipping it with controllable Boolean inputs $c_i'$ for each of the actions to be controlled. The second step is to install a way for the node to exhibit information about its state to the outside. It can be done by outputting state information $s_i'$ directly, as suggested informally in Figure 5(a). Alternately, in order to formalize things in a way enabling modular DCS, we transform the enforce part of the contract, so that it can be used in an upper-level contract as an environment model, as explained above and in Section 3.2.2. We modify the objective into the conjunction of the previously seen local objective $Obj_i$ and a term $Obj_{m_i}$, formalizing the fact that when the new control variable $c_i'$ is false, it does inhibit its associated action $a_i$, i.e., it implies that $a_i$ is false. For each action, its associated outside control objective for its inhibition is formulated as follows: $(\neg c_i' \Rightarrow \neg a_i)$. However, depending on the type (short or long) of the action, the objective is translated differently.
For a short action, it is translated directly to: $Obj_{m_i} = (c_i' \text{ or not } a_i)$. Long actions must be handled differently, because once $a_i$ is triggered, $c_i'$ can no longer prevent or interrupt it. Therefore, in order to make this explicit in the local contract, to be used by upper-level contracts, we must link the values of $a_i$ (triggering of the action) and $s_i$ (current state of the action). This is done by saying that if the action was not active at the previous instant, i.e., $s_i$ was false (not (false $fby$ $s_i$)), and $a_i$ is not true at the current instant, then $s_i$ remains false. As before, $c_i'$ can prevent the triggering of the action, i.e., $a_i$ becoming true. Hence, $Obj_{m_i} = \text{LongActions}(c_i', a_i, s_i)$, defined by:

\[
\text{LongActions}(c_i', a_i, s_i) \overset{\text{def}}{=} (c_i' \lor \neg a_i) \land \big( (\neg a_i \land \neg(\text{false } fby\ s_i)) \Rightarrow \neg s_i \big)
\]

As illustrated in Figure 5(b), in the node ctrl-coord-mgrs a DCS problem is solved, taking as the control objective to be made invariant: $Obj \land \bigwedge_i Obj_{m_i}$, where each $Obj_{m_i}$ of this level of contract is defined as previously explained. The sub-nodes $M_i$ each exhibit their contract $Obj_i$, which includes the local modularity term as above. Hence, the DCS problem at this level makes the assumption that $\bigwedge_i Obj_i$ is enforced by the lower-level contracts and coordination controllers, as explained in Section 3.2.2.

#### 3.3.2 Modular coordination of managers

As composites have been made controllable in the same way as managers, they can be used to construct coordinated assemblies modularly. Re-use of instances of composites is made seamless in new assemblies.

($fby$ is a Heptagon operator introducing an initialized delay: $v$ fby $x$ denotes the previous value of $x$, initialized with $v$ at the first instant.)
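Both objective shapes can be evaluated concretely. The following plain-Python rendering is our own illustration (not Heptagon code); `s_prev` stands for the delayed value `false fby s_i`:

```python
def short_action_obj(c_ctrl, a):
    # Obj_m = (c' or not a): when the control input c' is false,
    # the short action a must not be triggered.
    return c_ctrl or not a

def long_action_obj(c_ctrl, a, s, s_prev):
    # LongActions(c', a, s): c' can refuse the triggering of a, and the busy
    # state s may only become true through a triggering (a) or because the
    # action was already running at the previous instant (s_prev).
    return (c_ctrl or not a) and (a or s_prev or not s)

# Inhibition: with c' false, triggering a violates the objective.
print(short_action_obj(False, True))              # False
# A running long action keeps its busy state without re-triggering.
print(long_action_obj(False, False, True, True))  # True
# The busy state cannot appear without a triggering.
print(long_action_obj(True, False, True, False))  # False
```

Note how the second conjunct of `long_action_obj` never constrains an already-running action (`s_prev` true), matching the fact that a long action cannot be interrupted once started.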
## 4. MULTI-LOOP MULTI-TIER SYSTEMS

We apply and validate our approach on multi-loop multi-tier systems, typical of the domain of data-center administration. This work is done in the framework of the Ctrl-Green project, in cooperation with Eolas, whose business is providing Cloud services.

### 4.1 Datacenter management

#### 4.1.1 Multi-tier replication-based system

The JEE multi-tier applications we consider, as shown in Figure 6, consist of an apache web server, receiving incoming requests and distributing them with load balancing to a tier of replicated tomcat servers. The latter access the database through a mysql-proxy server, which distributes the sql queries, with load balancing, to a tier of replicated mysql servers. The global system running in the data-center consists of a set of such applications in parallel.

Figure 6: Multi-loop JEE multi-tier application.

#### 4.1.2 Autonomic managers

For each AM, we describe its target, aim, input sensors, output actions (short or long), and controllability.

Self-sizing targets replicated servers. It aims at lowering resource usage while preserving performance. It automates the process of adapting the degree of replication depending on the system load, measured through the CPU usage of the servers' hosts. The desired state is delimited by minimum and maximum thresholds. Periodically, an exponentially weighted moving average (EWMA), \( \text{cpu}_{\text{Avg}} \), is computed. When \( \text{cpu}_{\text{Avg}} \) is higher than the maximum threshold (i.e., overload), it triggers size-up (a long action) for the provision of a new replica. When \( \text{cpu}_{\text{Avg}} \) is lower than the minimum threshold (i.e., underload), it triggers size-down (a short action) for the removal of a replica. Each of these two actions can be inhibited.

Self-repair targets a server as well as replicated servers. It aims at preserving the availability of the server's service.
It manages fail-stop failures detected through heartbeat, and automates the process of restoring a server when it fails. It triggers the repair (a long action, which can be inhibited) of a failed server, which consists of deploying the server on a new host, configuring it, and launching it. For replicated servers, the degree of redundancy is restored to tolerate up to \( m-1 \) failures of \( m \) servers during the mean time to repair.

Consolidation targets the global virtualized data-center. It aims at optimizing global resource usage while preserving system performance. It automates the process of adapting the computing capacity made available in a virtualized data-center. It periodically evaluates the resources allocated to the virtual machines (VMs) and the available computing capacity, and plans long actions to either reduce (Decr) or increase (Incr) the capacity. In this work, we use VMware DPM for power management in a virtualized data-center. It plans migration actions to deliver more resources to the overloaded VMs, which can require turning physical servers on. When the physical servers are under-utilized, it plans migration actions so as to turn some servers off. It can be controlled by delaying or cancelling the actions. Controllability of the consolidation manager is considered here only at a large grain: an interesting perspective is finer-grain control, between the sequential phases of this complex operation, but it requires the difficult determination of appropriate synchronization points.

#### 4.1.3 Coordination problems

As seen in Figure 6, within a multi-tier application, the failure of a server in a replicated tier can cause a saturation (hence a temporary overload) of the remaining servers, due to the fail-over mechanism. Furthermore, each tier depends on its predecessor (e.g., the load balancer), since its service is requested by the latter. An increase of the requests received from its predecessor increases its activity, and conversely.
However, a decrease of the requests can also be caused by a failure, which can cause a temporary underload and useless sizing operations. At the global level of the data-center, the uncoordinated execution of instances of self-sizing and self-repair at the same time as consolidation can lead to failures of the actions triggered by the managers. The execution of a consolidation plan can take a long time to complete, and its success as well as its efficiency depend on the consistency of the state of the data-center along the process. The adding, repair and removal actions, occurring at any time, can invalidate a consolidation plan being executed, which did not anticipate them. This can cause failure of migration operations or inefficiency of the consolidation. Consolidation can also cause failure of adding and repair actions, e.g., it can reduce the computing capacity of the VMs.

#### 4.1.4 Coordination policy

To avoid the above interferences, policies are defined, to be enforced by inhibiting some managers accordingly.

1. Within a replicated tier, avoid size-up when repairing.
2. Within a load-balanced replicated tier, avoid size-down when repairing the load-balancer.
3. In multi-tier applications, more generally, avoid size-down in a successor replicated tier when repairing in a predecessor.
4. At the global data-center level, when consolidating, avoid self-sizing and repairing.
5. Wait until repairs or additions finish before consolidation decrease, and until removals finish before increase.

### 4.2 Modular control model

In this section we formalize the previous description, by modelling the behaviors of the individual managers and their coordination policy in the form of a DCS problem, following the method of Section 3.3.
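As a concrete illustration of the self-sizing logic of Section 4.1.2, here is a plain-Python sketch of ours; the smoothing factor and the threshold values are illustrative assumptions, not figures from the system:

```python
def ewma(prev, sample, alpha=0.25):
    # Exponentially weighted moving average of the replicas' host CPU usage.
    # alpha is an illustrative smoothing factor, not the system's actual value.
    return alpha * sample + (1.0 - alpha) * prev

def sizing_decision(cpu_avg, min_thr=0.30, max_thr=0.80):
    # Overload -> size-up (long action); underload -> size-down (short action).
    # Either decision can subsequently be inhibited by the coordination controller.
    if cpu_avg > max_thr:
        return "size-up"
    if cpu_avg < min_thr:
        return "size-down"
    return None

print(sizing_decision(0.9))   # size-up
print(sizing_decision(0.1))   # size-down
print(sizing_decision(0.5))   # None
```

The EWMA smooths transient spikes, so the thresholds compare against a trend rather than an instantaneous load; it is precisely these `size-up`/`size-down` decisions that the synthesized controllers may inhibit.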
#### 4.2.1 Modelling the managers' control behaviors

Self-sizing control is an instance of the general pattern of Figure 2(d), node ctrl-mgr, with outputs: long action \( add \), short action \( rem \), and busy state \( adding \); and with inputs: controls \( ca \) and \( crm \) for the actions, monitoring overload \( o \) and underload \( u \), and adding notification \( na \):

\[ (\text{add}, \text{rem}, \text{adding}) = \text{self-sizing}(ca, crm, o, u, na) \]

Self-repair control is a simpler case, with only a long action of repairing. It can also be defined as an instance of the node ctrl-mgr of Figure 2(d), with outputs: long action \( rep \) and busy state \( repairing \); and inputs: control \( cr \), monitoring failure \( fail \), and notification of repair done \( nr \). Unused parameters for short actions of the ctrl-mgr node can, for inputs, be given the constant value false, and, for outputs, be left unused. This defines the new node:

\[ (\text{rep}, \text{repairing}) = \text{self-repair}(cr, \text{fail}, nr) \]

#### 4.2.2 Coordination objectives

The parallel composition of instantiations of the above automata describes the coexistence of the instances of the corresponding managers. The control is specified on this composed behavior. We formalize the strategy of Section 4.1.4.

1. Within each replicated tier, avoid size-up when repairing: \( \text{not (repairing and add)} \)
2. Avoid size-down when repairing the load-balancer: \( \text{not (repairing and rem)} \)
3. In multi-tier applications, more generally, between predecessors and successors: \( \text{not ((repairing or adding) and rem)} \)
4. When consolidating, avoid repair and sizing: \( \text{not ((Incr or Decr) and (repairing or adding or add or rem))} \)
5. Wait for consolidation decrease until repairs or additions finish: \( \text{not ((repairing or adding) and sd)} \), and for increase until removals finish: \( \text{not (rem and si)} \)

### 4.3 Exploiting the models with DCS

#### 4.3.1 Monolithic synthesis

In order to evaluate the benefit of modularity, we make the exercise of performing DCS the classical way.

\[
\begin{align*}
(\ldots) &= \text{Main\_node}(\ldots) \\
(\text{rep}_1, \text{repairing}_1) &= \text{self-repair}(cr_1, \text{fail}_1, nr_1) \\
(\text{rep}_2, \text{repairing}_2) &= \text{self-repair}(cr_2, \text{fail}_2, nr_2) \\
(\text{add}_1, \text{rem}_1, \text{adding}_1) &= \text{self-sizing}(ca_1, crm_1, \ldots) \\
&\;\;\vdots \\
(\text{add}_M, \text{rem}_M, \text{adding}_M) &= \text{self-sizing}(ca_M, \ldots) \\
(\text{si}, \text{sd}, \text{Incr}, \text{Decr}) &= \text{consolidation}(ci, cd, i, d, e)
\end{align*}
\]

Figure 8: Monolithic node.

The specification of the monolithic control is encoded in a single composite node, shown in Figure 8, grouping all instances of the involved managers, composed in parallel in its body, with a conjunction of all control objectives in its contract. This can be tedious and complex when a huge number of managers is considered. It does not allow a decentralized control, because all the control objectives are grouped in the single upper-level node. The structure of this coordination is shown in Figure 14.

#### 4.3.2 Modular synthesis

We present the reusable nodes bottom-up, as shown in Figure 9, from left to right: we first build the coordination controller for self-sizing and self-repair in a replicated tier. This controller is re-used for the coordination of managers in two consecutive tiers, the front tier being a load balancer for the second tier constituted of replicated servers. The resulting controller is re-used for the coordination of a multi-tier system.

Figure 9: Bottom-up re-use of nodes.

Replicated servers tier.
The composite node shown in Figure 10 specifies the control of the instances of self-sizing and self-repair managing the same replicated tier. Its control is composed of four objectives, one for the local coordination: not (repairing and add), while the rest concern the guarantees of the enforcement of a coordination strategy from outside the node. The control from outside is received through the input variables cr', ca' and crm'. As can be seen here, the modularity objective is very systematic and could easily be provided as syntactic sugar.

Figure 10: Replicated tier node.

Load balancer and replicated servers tier. In this node, shown in Figure 11, we re-use an instance of the above composite node from Figure 10 and an instance of self-repair dedicated to the management of a load balancer placed in front of the replicated servers, which distributes the incoming load to them. Here also, the local coordination strategy to be enforced, not (repairingL and rem), is complemented with the modularity objectives.

Figure 11: Load-balanced replicated tiers node.

Application. The node of Figure 12 coordinates two instances of the previous node from Figure 11 for the control of the instances of self-sizing and self-repair managing two consecutive load-balanced replicated tiers. The coordination strategy consists in preventing size-down in the back-end load-balanced replicated tier ("successor") when a failure is being repaired in the front one. This is expressed as follows: not ((repairingL or repairing) and remL)

Figure 12: Multi-tier application node.

Global system: data center. The whole multi-application system is constructed progressively, by first considering the two-application case. Figure 13 shows the node and contract instantiating the previous node for each of them, as well as a consolidation manager.
At this level of control, only the coordination strategy between the multi-tier applications and the consolidation manager is specified, the control within the multi-tier applications being delegated to the instances of the previous node modelling them. Having more applications in a data-center is done by composing an instantiation re-using the previous node with a new instantiation re-using the application node. The contract of this new composition is similar to the one in Figure 13. This enables a hierarchical construction of the control of an N-application system.

Figure 13: Two-application data-center.

Comparisons and discussion. The advantages of modularity can be seen here, in terms of the objectives of Section 1.3. Regarding the specification aspect (objective (i) of Section 1.3), tiers and groups of tiers are described locally, including their control, and assembled hierarchically, as shown in Figure 15, instead of having all automata on one side and all contracts on the other, as in the monolithic case shown in Figure 14. This favors the re-use of Heptagon/BZR nodes in different contexts. In particular, the repair manager is re-used in the replicated tier and for the load balancer. More significantly, because it has a contract and a controller, the coordinated load-balanced replicated tier is used twice in an application, with a difference in the controls, in that the downstream one is subject to more constraints than the upstream one.

Regarding the other aspect, the combinatorial complexity of DCS and the cost of compilation of the controllers (objective (ii) of Section 1.3): for various sizes of the system (i.e., various numbers of applications), we have performed Heptagon/BZR compilations and syntheses, the results of which are shown in Table 1. Comparative costs of DCS, monolithic and modular, for the different cases varying in the number of applications in the data-center, are given in terms of compilation CPU time and memory usage.
For small numbers of applications, values are not significant for memory; at 4 applications, the monolithic approach reaches the limits of the natural combinatorial explosion of the state-space exploration techniques: the computation was not finished after more than two days, and no values were sought for larger systems. The other approach, benefiting from modularity, goes significantly further, even if still presenting growing costs. In brief, we can see that monolithic DCS is exponentially costly in the size of the system, whereas modular DCS keeps producing results, showing scalability.

| nb. app. | monolithic | modular |
|----------|------------|---------|
| 1 | 4s | 5s |
| 2 | 49s | 11s |
| 3 | 42m24s | 24s |
| 4 | > 2 days | 1m22s |
| 5 | - | 4m30s |
| 6 | - | 13m24s |
| 7 | - | 25m57s |
| 8 | - | 30m36s |
| 9 | - | 2h11m |
| 10 | - | 9h4m |

Table 1: DCS: duration and memory usage.

Although we show the total compilation time in Table 1, the synthesis of the control logic of each node equipped with a contract is performed independently. A composite node which is the assembly of sub-nodes equipped with contracts requires, at compilation, only the contracts defined in the sub-nodes for the synthesis of its control logic. Therefore the compilations can be run in parallel. Furthermore, the recompilation of a composite node is necessary only when the interfaces (inputs, outputs) or the contracts of its sub-nodes are modified; otherwise it can be re-used as such.

### 4.4 Implementation

The system we described has been implemented on our experimental data-center.
Figure 16 shows uncoordinated executions in which failures occur in 16(a) at 17 min (the Apache server fails), and in 16(b) at 19 min (a Tomcat server fails). In 16(a) the failure leads to an underload in the Tomcat and Mysql tiers, causing the removal of a replicated server in each tier. In 16(b) the failure causes an underload in the Mysql tier, which leads to the removal of a replica, as seen in the square-edged curve (number of replicas) going down. However, the degree of replication is restored after the repair of the failed server, by re-adding the uselessly removed server, as shown in 16(a) at 21 min and 28 min, and in 16(b) at 25 min. By contrast, in Figure 17, for executions coordinated by the controllers, as expected, the reaction to the underloads during the failure repair (in 17(a) at 20 min, and in 17(b) at 17 min) is inhibited, the square-edged curves remaining flat; hence the system administration saves unnecessary operations.

## 5. RELATED WORK

The general question of coordinating autonomic managers remains an important challenge in Autonomic Computing [7], even though such coordination is made necessary in complete systems with multiple loops combining several dimensions and criteria. Some works propose extensions of the MAPE-K framework in order to allow for synchronization, e.g., through access to a common knowledge base [2]. A distinctive aspect of our approach is to rely on explicit automata-based behavioral models, amenable to formal techniques like verification or the more constructive DCS. Coordination of multiple energy management loops is done in various ways, e.g., by defining power vs. performance trade-offs based on a multi-criteria utility function in a non-virtualized environment [6], or by tuning mechanisms as in OptiTuner [15]. These approaches seem to require modifying AMs for their interaction, and to define the resulting behavior by quantitative integration of the measures and utilities, which relies on intuitive tuning values and does not handle logical synchronization aspects.
We coordinate AMs by controlling their logical activity state, rather than by modifying them. Concerning decision and control of autonomic systems, some approaches rely upon Artificial Intelligence and planning [22], which has the advantage of managing situations where configurations are not all known in advance, but the corresponding drawbacks of a costly run-time exploration of possible behaviors and a lack of assured safety of the resulting behaviors. Our work adheres to the methodology of control theory, and in particular Discrete Event Systems, applied to computing systems [14]. Compared to traditional error-prone programming followed by verification and debugging, such methods bring correctness by design of the control. In particular, DCS offers automated generation of the coordination controller, reducing design effort compared to hand-writing, and facilitating modification and re-use. Also, the maximal permissivity of synthesized controllers is an advantage compared to over-constrained manual control, which impairs performance even when correct. Applications of DCS to computing systems have not been many until now; it has been applied to address the problem of deadlock avoidance [24]. Compared to this, we consider more user-defined objectives.

Works on compositional verification have brought up some issues which can be related to modular controller synthesis. For instance, a method for automatic assumption generation has been proposed [10]. It relies on algorithms for the generation of automata based on language equivalence, in order to generate intermediary assumptions for compositional verification. Compared with modular controller synthesis, the generated automata do not act upon the system; they only help its verification. Nevertheless, an interesting perspective would be to consider mixing the two techniques, in order to facilitate the controller synthesis and relieve the programmer from the burden of writing intermediary assumptions.
However, this technique cannot be applied as is, since assumptions cannot be inferred from the properties to be enforced without knowledge of the generated controller.

## 6. CONCLUSIONS

We put the principle of modularity into practice for the problem of coordination in multiple-loop autonomic management, in a component-based approach. Instead of re-designing a global combined loop, we benefit from the advantages of modularity by defining a new method. We propose a general design methodology based on formal modelling with automata, and the application of DCS to obtain automatically correct controllers. We leverage modularity in this approach, and confront it with an experiment of commensurate size on a real-world multi-tier, multi-service-level system. We achieve our objectives of Section 1.3 by:

1. enabling re-use and coordination of complex administration managers, through their control specifications;
2. modularizing the DCS, thereby breaking down the exponential complexity of the basic algorithms.

On the latter point, the gain in compilation-time synthesis opens new perspectives on the scalability of our method and its applicability to larger systems. Perspectives lie at different levels. The general method is systematic enough to form the basis of an administration management-level Domain Specific Language (DSL), allowing a designer to construct systems for which the formal automata models and control objectives can be generated automatically. Improvement of the DCS technique is ongoing, to integrate not only logical but also quantitative aspects in the synthesis algorithms, like consumption or load. Also, the compilation using modular DCS produces modular code, which opens perspectives for a distributed execution; this is ongoing work.

## 7. REFERENCES
---

Model-Based, Event-Driven Programming Paradigm for Interactive Web Applications

Aleksandar Milicevic, Daniel Jackson
Massachusetts Institute of Technology, Cambridge, MA, USA
{aleks,dnj}@csail.mit.edu

Milos Gligoric, Darko Marinov
University of Illinois at Urbana-Champaign, Urbana, IL, USA
{gliga,marinov}@illinois.edu

Abstract

Applications are increasingly distributed and event-driven. Advances in web frameworks have made it easier to program standalone servers and their clients, but these applications remain hard to write. A model-based programming paradigm is proposed that allows a programmer to represent a distributed application as if it were a simple sequential program, with atomic actions updating a single, shared global state. A runtime environment executes the program on a collection of clients and servers, automatically handling (and hiding from the programmer) complications such as network communication (including server push), serialization, concurrency and races, persistent storage of data, and queuing and coordination of events.

## 1. Introduction

Today's era of social networks, online real-time user collaboration, and distributed computing brings new demands for application programming. Interactiveness and a multi-user experience are essential features of successful and popular applications. However, programming such inherently complex software systems, especially when an interactive (real-time) multi-user component is needed, has not become much easier. The reasons for this complexity are numerous and include:

1. the distributed architecture of multiple servers running on the cloud (server farms) interacting with clients running on different platforms (e.g., smartphones, web browsers, desktop widgets, etc.);
2.
the abstraction gap between the problem-domain level (high-level, often event-driven) and the implementation level (low-level messages, queues, schedulers, asynchronous callbacks);
3. shared data consistency;
4. concurrency issues such as data races, atomicity violations, deadlocks, etc.

Problems of this kind are known as accidental complexity [13], since they arise purely from abstraction mismatches and are not essential to the actual problem being solved. Carefully managing accidental complexity, however, is absolutely crucial to developing a correct and robust system. Although thoroughly studied in the literature, these problems not only pose serious challenges even for experienced programmers, but also distract the programmer from focusing on the essential problems, i.e., designing and developing the system to achieve its main goals.

We propose a new model-based programming paradigm for designing and developing interactive event-driven systems, accompanied by a runtime environment for the monitored execution of programs written in that language. Our paradigm is structured around models (mostly declarative, but fully executable) using concepts from the domain of interactive web applications (e.g., shared data, system events, interactions and interconnections between clients, etc.), and also explicitly separating concerns like data, core business logic, user interface, privacy and security rules, etc. This allows the programmer to think and write code at a high level, close to the actual problem domain, directly addressing the abstraction-gap issue. The structural information about the system, which is inherently present in these models, allows the runtime environment to automatically manage many forms of accidental complexity, from synchronizing and dispatching concurrent events to propagating data updates to all connected clients (also known as "server push" in the web developer community).
The programmer, therefore, has a very simple sequential programming view, and it is the job of the runtime environment to turn that into a distributed application. Relieving the programmer of writing multithreaded code eliminates, by construction, a whole class of concurrency bugs, which are notoriously difficult to debug and fix. We call this whole approach SUNNY, as our goal is to shine some light on the dark world of distributed systems, making it less tedious and more fun, and, at the same time, more robust and more secure. In this paper, we also present a concrete implementation of this approach for Ruby on Rails, which we call RED (Ruby Event Driven).

## 2. Example

In this section we present a simple example of a real-world application to explain the proposed programming paradigm and illustrate the expressiveness and ease of use of our language. Our intention in this example is to implement a "public IRC" (Internet Relay Chat) web application, meaning that anyone can create a chat room (provided that a room with the same name does not already exist) and that the existing rooms are public (anyone can join and send messages once joined). With most applications of this kind, the web GUI must be responsive and interactive, automatically refreshing parts of the screen whenever something important happens (e.g., a new message is received), without reloading the whole page.

Figure 1 shows a simple IRC implementation written in RED (our implementation of SUNNY for Ruby on Rails). RED programs consist of several different models of the system (described next), and as such are fully executable. These models are fairly high-level and mostly declarative, so we occasionally refer to them as specifications.
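As a caricature of this sequential view, the following plain-Python sketch of ours (hypothetical names; it is not RED syntax, which appears in Figure 1) shows events as atomic actions with a precondition (requires) and an effect (ensures) on a single shared global state:

```python
# Illustrative sketch only: the programmer writes atomic events over one shared
# state; the runtime (not shown) would handle distribution, races, and push.
state = {"rooms": {}}  # single, shared global state

def create_room(user, name):
    # requires: user is signed in and no room with that name exists
    assert user is not None and name not in state["rooms"]
    # ensures: a new public room exists
    state["rooms"][name] = {"members": set(), "msgs": []}

def join_room(user, name):
    # requires: user is signed in and the room exists
    assert user is not None and name in state["rooms"]
    # ensures: the user is added to the room's members
    state["rooms"][name]["members"].add(user)

create_room("alice", "general")
join_room("alice", "general")
print(state["rooms"]["general"]["members"])  # {'alice'}
```

In the actual paradigm, each such event runs atomically and the runtime pushes the resulting state updates to every connected client, so the programmer never writes synchronization or networking code.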
The data model of the IRC application consists of a User record (which specializes the RED library AuthUser record and adds a status field), a Msg record (where each message has a textual body and a sender), and a ChatRoom record (each room has a name, a set of participating users, and a sequence of messages that have been sent). These fields are defined using the refs and owns keywords: the former denotes aggregation (simple referencing, without any constraints), and the latter denotes composition (implying that (1) when a record is deleted, all owned records are deleted too, and (2) no two distinct records can point to the same record via the same owned field).

The network model in this example consists of two machines, namely Server and Client. The Client machine has a corresponding User, whereas the Server machine maintains a set of active ChatRooms. They respectively inherit from the library AuthClient and AuthServer machines, to bring in some fairly standard (but library-defined, as opposed to built-in) user management behavior, like new user registration, sign-in and sign-out events, etc.¹

To implement the basic functionality of IRC, we defined an event model with three event types: CreateRoom, JoinRoom, and SendMessage, as shown in Figure 1(c). Each event has an appropriate precondition (given in the requires clause) that checks that the requirements for the event are all satisfied before the event may be executed. For instance, the events CreateRoom, JoinRoom, and SendMessage all require that the user has signed in (client.user is non-empty), SendMessage additionally requires that the user has joined the room, etc. A specification of the effects of an event (given in the ensures clause) is concerned only with updating the relevant data records and machines to reflect the occurrence of that event.
For example, the effects of the JoinRoom event amount to simply adding the user requesting to join the room to the set of room members; the runtime system will make sure that this update is automatically pushed to all clients currently viewing that room. Actions like updating the GUI are specified elsewhere, independently of the event model; this is key to achieving separation of concerns.

By default, all fields in our models are public and visible to all machines in the system. That approach might be appropriate for the running “public IRC” example, where everything is supposed to be public anyway. For many other systems, however, it is often necessary to restrict access to sensitive data. Let us therefore define some privacy rules even for this example to show how that can be done in SUNNY, declaratively and independently of the event model. The HideUserPrivateData policy from Figure 1(d) dictates that the value of a user’s password should not be revealed to any other user and, similarly, that the status message of a user should not be revealed to any other user, unless the two users are currently both members of the same chat room. Note that the latter rule is dynamic, i.e., it depends on the current state of the system (two users being together in the same chat room) and thus its evaluation for two given users may change over time. In addition to restricting access to a field entirely, when a field is of a collection type, a policy can also specify a filtering condition to be used to remove certain elements from that collection before the collection is sent to another machine. The FilterChatRoomMembers policy hides those members of a chat room who have not sent any messages (this simulates, to some extent, “invisible users”, a feature supported by some chat clients). SUNNY automatically checks policies at every field access; if any policy is violated, the access is forbidden simply by replacing the field value with an empty value.

---
1 The full listing of the RedLib::Web::Auth library is given in Figure 2; several events defined in this library are referred to later in the text.

Figure 1. A full implementation (excluding any GUI) of a simple public IRC application written in RED, our new domain-specific language for programming event-driven systems. Since RED is very high-level and mostly declarative, we often refer to RED programs as models, and also consider them to be the specification of the system.

3. Why The World Needs SUNNY

Interactive multi-user applications, even when having relatively simple functional requirements, are difficult to write using today’s programming languages and available state-of-the-art frameworks, the main reason being the abstraction gap between the problem domain and the concepts available at the implementation level. Just as one example, current systems typically do not offer much help with structuring and organizing the system; the programmer instead ends up spending time implementing such generic functionality anew for the project at hand, which is, for the barriers mentioned above, time-consuming, tedious, and also error-prone. We illustrate these points in terms of three concrete platforms for developing web applications.

### 3.1 The Java Approach

The Java language, which gained much of its success from being proposed as a platform for web development, is still one of the top choices for development of enterprise web systems. The language being mature, the runtime (JVM) being fast and solid, and an abundance of freely available third-party libraries are some of the points in favor. The trend of web development in Java still seems to be based around manually configuring and integrating a multitude of standalone, highly specialized libraries, designed independently to solve various web-related tasks, as opposed to having a single overarching framework designed to address most of the common issues.
A highly experienced Java expert, who is already familiar with the existing libraries for web development, object-relational mapping, database management, server push, and such (also already knowing how to configure them all so that they can interoperate and work well together) would have a good start developing our IRC example. For the rest of us, however, the situation is much worse. For someone already familiar with Java, but not too familiar with web development in Java, the effort just to get a handle on all the necessary libraries would far exceed the effort needed to implement the functionality of our example. Even the expert would have to be very careful about managing concurrent requests on the server side, setting up event processing queues (to avoid common concurrency issues but still achieve good throughput), implementing corresponding producer and consumer threads, and so on. Probably equally cumbersome would be manually keeping track of which clients are viewing what, automatically propagating updates when the data underlying the views change, and implementing Ajax-style code on the client side to refresh the GUI smoothly. All these are generic enough tasks that the programmer should not have to implement from scratch for every project.

### 3.2 The Rails Approach

In contrast to Java, the design of Rails [6] adopted the “convention over configuration” school of thought: instead of manually configuring every single aspect of the application, if certain conventions are followed, the Rails framework will automatically perform most of the boilerplate tasks behind the scenes and “magically” make things happen. Underneath the surface, unfortunately, it is still a configuration mess, and the magic is mostly concerned with low-level configuration of different components and how to tie them all together. This creates problems for many Rails programmers, because, as this magic has no high-level semantics, it is often difficult to understand and remember not only how it works, but also what it does.

Figure 2. The RedLib::Web::Auth library module, written in RED itself, provides common records and events for managing users and user authentication.

In SUNNY, we aim to offer a different kind of magic, which is easy to understand at the conceptual level (e.g., data updates are automatically propagated to clients, all the way to automatically refreshing the GUI), so the programmer need not understand the technical details behind its implementation. By imposing some structure on how the system should be organized and implemented (e.g., using the Model View Controller (MVC) architecture), Rails can indeed provide a lot of benefits for free. One of the most appealing features of Rails (especially back when it first appeared) is “scaffolding”: given just a few model files describing how the data structures are organized, Rails automatically generates a running web application, with the full stack, from the database to the web server, automatically configured and set up. While scaffolding greatly reduces the startup cost of developing a new application (even for inexperienced programmers), it is not meant to be a permanent, system-level solution.
The reason is that it is based on code generation from transient models: the generated files (including database configuration files, Rails controller classes, HTML views) work fine at the beginning, but as soon as something needs to be changed, everything needs to be changed manually, since there is nothing to keep them in sync otherwise. Furthermore, the models used for scaffolding support only scalar, primitive-typed fields. In SUNNY, in contrast, models (like those shown in Figure 1) are first-class citizens; not only do they exist at runtime, but they are central to the whole paradigm (i.e., the entire runtime semantics is built around them). Our models are also much richer, so there is enough information available to the SUNNY runtime environment to interpret them on the fly, instead of generating code up front. That way, the common problem of having inconsistencies between the models and the code is eliminated in SUNNY.

Concurrency in Ruby is an interesting topic. Ruby is inherently not concurrent: the standard interpreter executes Ruby code under a Global Interpreter Lock. As a result, Rails programmers can safely ignore threads and synchronization, and still have no data race issues. This, of course, comes at the cost of low scalability. When a more scalable implementation is needed, solutions typically require that the system be restructured so that blocking operations (like I/O) are offloaded to a different process, which is at the same time told what to do upon completion of the requested operation (the so-called Reactor pattern). Refactoring a system in this manner is almost never trivial or straightforward. We believe that concurrency and parallel processing do not have to be sacrificed to this extent to give the programmer a safe sequential programming model, as explained in more detail in Section 4.1.

### 3.3 The Meteor Approach

Meteor [5] is a newer web framework for fast and convenient development of modern web applications. Meteor has been rapidly gaining popularity.
It is a pure JavaScript implementation (both server and client have to be written in JavaScript) of an event-driven (publish/subscribe) system which also automatically propagates updates to all connected clients whenever the shared data changes. Meteor, however, focuses on providing a platform for automatic data propagation, whereas SUNNY is designed to also handle other aspects of the system, including richer models for shared data, GUI scaffolding, automated support for concurrency, etc. Specifically, Meteor does not offer much structure to help design the system, nor does it have rich models of the underlying shared data. The data model in Meteor consists of a number of flat collections (corresponding directly to database tables), with no type information and no explicit relationships between different model classes. Rich models enable both software engineering benefits (like automated test generation and verification of end-to-end properties) and productivity benefits (like automated GUI scaffolding).

4. The SUNNY Approach

A key idea of SUNNY is to make it possible to think about different events in isolation, and only in terms of the modifications to the data model they entail. Therefore, in the design phase, the programmer does not have to think about other issues, such as how to update the user interface to reflect the changes, or even about security and privacy policies; these can be specified separately and independently from the core event model. Limiting the specification this way is what forces the programmer to focus on the core logic of the system first (hence reducing the chances of software bugs in those core parts of the system) and what enables us to provide a unified and overarching runtime environment for fully automated resource management and constant data access monitoring for security violations.
The main components of SUNNY are:
- a Domain Specific Programming Language
- a Runtime Environment
- an Online Code Generator
- a Dynamic Template-Based Rendering Engine

Instead of going into technical details about each of these components, our main intent for this paper is to provide a broader discussion of the big-picture goals and design behind SUNNY, illustrate its usefulness and practicality through examples, and argue for the benefits it brings to software engineering best practices.2 We will next, therefore, walk through a sample execution of our system (still using the running IRC example) to better illustrate how the system works and how the benefits are achieved. Afterward, we will briefly describe each of the mentioned components using the concrete syntax of RED.

---
2 In contrast to GUI scaffolding implemented in Rails, ours is not a one-off code generation approach: it is rather based on generic (application-agnostic) templates which get evaluated at runtime, so again, there is no problem of falling out of sync.

### 4.1 Sample Execution

Consider a scenario in which a user initially opens the home page of our IRC application. This request is received by the web server via the HTTP GET method and placed in a processing queue (namely View Req Queue, Figure 3, top pipeline). From there, it is picked up by the View Renderer component, while the web server can immediately go back to serving incoming requests. Let us assume that the view corresponding to the home page is the irc template shown in Figure 4(a) and that the user is not logged in yet. These templates are written in the ERB language, which allows arbitrary Ruby expressions to be embedded inside the `<% %>` and `<%= %>` marks (the difference being that only the latter produces a textual output, while the output of the former is ignored). The View Renderer, therefore, evaluates the “else” branch of the template, and returns a login page with two input fields (for email and password) and a “Sign-in” button.
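The difference between the two ERB tag types is easy to demonstrate with Ruby's standard erb library; the template below is a simplified stand-in for the one in Figure 4(a):

```ruby
require 'erb'

# <% %> executes Ruby without emitting output; <%= %> emits the result.
template = ERB.new(<<~TMPL)
  <% user = nil %>
  <% if user %>
  Welcome, <%= user %>!
  <% else %>
  Please sign in.
  <% end %>
TMPL

puts template.result(binding)
```

Because `user` is nil, only the "else" branch produces output; the assignment and the `if` themselves emit nothing.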
While rendering a template, the View Renderer also maintains a View Tree structure which holds a single node for each Ruby expression that was evaluated during the execution of the template (templates can invoke other templates, potentially creating a hierarchy of nodes). Each node stores a list of fields that were accessed while the corresponding expression was being evaluated. In the case of this example, there is only one node in that tree, and the only field that was accessed was the user field of the current client instance (during the evaluation of the “if” condition). On the client side, our JavaScript library automatically recognizes the “Sign-in” button by its data-trigger-event HTML5 attribute, and, according to its value, associates it with the SignIn event (which is a part of the previously imported Auth library (Figure 2)). More concretely, it assigns an “onclick” handler to it, so that when the button is clicked, the associated form (discovered via the data-params-form attribute) is submitted (via an Ajax call) as the parameters of the SignIn event. When the user clicks this button, the SignIn event is triggered and received on the server side via the bottom processing pipeline in Figure 3. The EventHandler then picks it up from the queue, checks its precondition, and if it holds (in this case it does, since the requires clause is empty), proceeds to execute its postcondition. Assuming that the user entered valid credentials, the execution will assign a value to the user field of the current client instance (the client instance is always implicit and denotes the machine which submitted the currently executing event). Any modification to the data model triggers an internal “data update” signal, which is placed in the Update Queue (the right-most pipeline in Figure 3). A component called Pusher is in charge of serving the Update Queue.
Every time an update is received, it goes through a list of all connected clients and corresponding view trees, discovers which nodes could potentially be affected by the current update (by checking their list of field accesses), re-renders those nodes, updates the global Client → View Tree map, and pushes those changes to clients. On the client side, only the corresponding portion of the HTML DOM is replaced by the newly rendered text. In the running scenario, the only node that was stored for the current client was dependent on the user field, so only it has to be re-rendered. The new content is produced by executing the “then” branch, which amounts to rendering the user.html.erb template for the current user (the user object is by default available to the callee template via the user variable), and rendering the chat_room.html.erb template once for each room on the server (in this case the default variable name would be “chat_room”, but it is instead explicitly set to “room” via the :as option). The execution then continues in the same manner: clients continue to perform actions by triggering events from the domain, and the server keeps processing events, detecting changes in the data model, and re-rendering parts of the client views when needed. An explanation of how asynchronous message sending is declaratively specified directly in an HTML template (no separate JavaScript file), and without any Ajax code, is given in Section 4.4.3. To get a running version of this sample execution, if using RED, the programmer only needs to: - write the data, machine, and event models from Figure 1 (the security model is not necessary); - write the HTML templates from Figure 4; - deploy the application to a server running Ruby on Rails with our extensions; and - set the application home page to irc.html.erb (by configuring the root route in Rails). 
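The view-tree bookkeeping and the Pusher's selection step described above can be approximated in plain Ruby; ViewNode, AccessTracking, and affected_nodes are our illustrative names, not RED's actual API:

```ruby
# A view node remembers which fields were read while its template ran.
class ViewNode
  attr_reader :accessed
  def initialize
    @accessed = []
  end
end

module AccessTracking
  class << self
    attr_accessor :current_node
  end

  # Wrap an existing getter so it reports (class, field) to the node
  # whose template is currently being rendered.
  def self.track(klass, field)
    original = klass.instance_method(field)
    klass.define_method(field) do
      node = AccessTracking.current_node
      node.accessed << [self.class, field] if node
      original.bind(self).call
    end
  end

  # Render a block in the context of a fresh node; return the node.
  def self.render
    self.current_node = ViewNode.new
    yield
    node = current_node
    self.current_node = nil
    node
  end
end

# The Pusher's selection step: given each client's rendered nodes,
# find the (client, node) pairs affected by an update to one field.
def affected_nodes(client_trees, updated_field)
  client_trees.flat_map do |client, nodes|
    nodes.select { |n| n.accessed.include?(updated_field) }
         .map    { |n| [client, n] }   # these get re-rendered and pushed
  end
end
```

Rendering the login template through `AccessTracking.render` records the client's user field on its node; a later update to that field then selects exactly that node for re-rendering.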
Compared to implementing the same application in standard Rails:
- in place of our data model, the programmer would write ActiveRecord model classes (one model class per record), which are more verbose and require more configuration (as discussed in Section 4.4.2);
- in place of our machine model, the programmer would likely use in-memory classes and the Rails session storage (not affecting the complexity of the implementation);
- in place of our event model, the programmer would write controllers of approximately the same complexity;
- the HTML templates would remain the same, as would the deployment process.

Additionally, the Rails programmer would have to:
- write a database schema (discussed in Section 4.4.1), carefully following the Rails naming convention;
- write a controller for each model class implementing the standard CRUD (Create, Read, Update, Delete) operations (again, certain naming conventions have to be followed);
- configure routes for each controller;
- decide on a third-party library to use to implement the server push (pushing data updates to connected clients in real time);
- implement server-side code that keeps track of what data each client is currently displaying;
- implement server-side code that detects model changes (made during the execution of controllers);
- implement server-side code that pushes data changes to each client whenever a piece of data currently being displayed on that client is changed;
- implement client-side code that listens for data changes from the server;
- implement client-side code that dynamically re-renders affected parts of the GUI whenever a data update is received.

In both cases, a CSS file is necessary in order to make the GUI look pretty.
While RED provides dynamic GUI updates for free, and for that does not require the programmer to write any JavaScript, it does not prevent him or her from doing so; RED comes with a client-side JavaScript library (see Section 4.4.3) which can be used to interact with the server side, customize how the GUI gets updated (e.g., implement special visual effects or animations), asynchronously trigger events, etc.

### 4.2 Domain-Specific Programming Language

We designed a domain-specific language for writing SUNNY models in order to better emphasize the key concepts of our paradigm. This language has strong foundations in the Alloy modeling language [33], a relational language based on first-order logic. Alloy is declarative, and as such, it is not executable per se; it is instead used primarily for modeling and checking logical properties of various kinds of systems. Most of Alloy’s expressive power comes from its relational base (including all the supported relational operators), which, however, can be made efficiently executable in the context of object-oriented programs [57]. For example, the dot operator (‘.’) is actually a relational join, so an expression that fetches all users currently present in any chat room on a server can be written simply as Server.rooms.members.

In RED, we implemented this language as an embedded DSL in Ruby. Concretely, each of record, machine, and event is just a function that takes a name, a hash of field name → type pairs, and a block; it (1) returns a Class having those field names as attributes, while storing the type information in a separate meta-class, (2) creates, in the current module, a constant with the given name and assigns the newly created class to it, and (3) if a block is given, evaluates that block in the context of that class.
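A miniature version of this scheme, covering just the three steps above, can be written in a few lines; it omits the field interceptors and richer metadata of the real implementation, and FIELD_TYPES is our illustrative name:

```ruby
# A tiny stand-in for RED's `record` function: builds a class with the
# given fields, remembers their declared types, and binds it to a constant.
FIELD_TYPES = {}

def record(name, fields = {}, &blk)
  cls = Class.new do
    fields.each_key { |f| attr_accessor f }   # (1) fields become attributes
  end
  FIELD_TYPES[cls] = fields                   # type metadata kept aside
  Object.const_set(name, cls)                 # (2) name the class
  cls.class_eval(&blk) if blk                 # (3) evaluate the body, if any
  cls
end

# Usage: declares a Msg record with one typed field and an extra method.
record :Msg, :text => String do
  def shout
    text.upcase
  end
end
```

Keeping the declared types in a side table (rather than in Ruby's untyped attributes) is what later lets the runtime generate database schemas and client-side models from the same declaration.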
The block parameter can be used to define additional instance methods (as in Figure 2, method authenticate in record AuthUser), but also to define fields with more information than just name and type (e.g., the call to the owns function in Figure 1(a), which additionally specifies that whenever a chat room is deleted, the deletion operation should be propagated to all the messages referenced by that room via the messages field). Having this syntactic sugar (instead of just using the built-in class keyword) provides a convenient way of specifying typed fields, but more importantly, being in charge of class generation also gives us an easy way to hook into all field accesses, where we perform the necessary policy checks. We override the const_missing method, so that unresolved types are converted to symbols at declaration time; we only require that all types can be resolved at runtime. Note, however, that none of our language features mandates this implementation choice; a different implementation targeting a different platform is possible.

### 4.3 Runtime Environment

One of our main goals is to relieve the programmer of having to explicitly implement a distributed system, i.e., explicitly synchronize multiple processes, handle inter-process communication, manage queues and messages, ensure data consistency, and a number of other tasks typical for distributed and concurrent programming. By introducing a specific programming model (as described previously), we tailored a generic runtime environment to automate all those tasks. The runtime implements a standard message-passing architecture, a well-known and widely used idiom for designing distributed systems, which we use to dispatch events and data updates between entities (Figure 3). Another important role of the runtime environment is to automatically check and enforce privacy policies at runtime.
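A plain-Ruby approximation of this enforcement strategy, mirroring the HideUserPrivateData status rule from Section 2 (rule and names are ours, not RED's implementation):

```ruby
# A policy rule decides, for a (viewer, owner) pair, whether a field is
# hidden. Here: hide `status` unless the two users share a chat room.
# Rooms are modeled as plain arrays of user names for illustration.
HIDE_STATUS = lambda do |viewer, owner, rooms|
  rooms.none? { |r| r.include?(viewer) && r.include?(owner) }
end

def owner_status(owner)
  "#{owner} is away"   # stand-in for the stored field value
end

# Enforcement: instead of raising on a forbidden access, return an
# empty relation, so that calling code keeps working unmodified.
def read_status(viewer, owner, rooms)
  HIDE_STATUS.call(viewer, owner, rooms) ? [] : [owner_status(owner)]
end
```

Because every field is relation-valued, the empty relation is always a type-correct result, which is what lets event handlers and GUI formulas stay oblivious to the policies in force.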
Policies are checked at every field access attempted by user-defined code: all relevant restriction rules are discovered and applied. Instead of throwing an exception when an access is forbidden, the runtime simply returns an empty relation; this is a legal value, since all fields in our models are relation-valued. Restricted values (e.g., the status of a user who shares no chat room with the requesting user, or the filtered set of room members), however, are in this way still reliably hidden or properly filtered out, without any effort from the programmer. This illustrates the declarative nature of our privacy policies, and how the runtime can automatically enforce them. It also shows that the operational code (e.g., event handlers, embedded GUI formulas, etc.) usually can be written independently of privacy policies, and does not need to be updated when policies change.

### 4.4 Online Code Generator

Many of the responsibilities of the runtime environment are enabled by the code automatically generated from the core models, on the fly, during the system initialization phase. In addition, we use code generation to automate various common tasks. Several of these tasks are briefly described next.

### 4.4.1 Database Migrations

The richness of our data model makes it possible for us to handle data persistence fully automatically. This includes (1) generating and maintaining a database schema (discussed in this section), and (2) implementing an object-relational mapper (i.e., mapping domain objects onto that schema, discussed next in Section 4.4.2). A database schema provides a way to persist all relevant information from the domain model. Because the schema is always supposed to closely mirror the model, ideally it should not have to be written/programmed separately. In standard Rails, however, that is not the case; the schema exists as a standalone code artifact, and the programmer is in charge of maintaining it and keeping it in sync with the application model. Although Rails comes with automated generators that can create schema skeleton files from simple name → type pairs, they only work for primitive types and scalar fields; for more advanced features like type inheritance and non-scalar fields (many-to-many relations), the programmer has to manually extend the generated schema file in such a way that it works with the object-relational mapper on the other side.

Figure 5. Several different snippets of automatically generated code for the IRC example: (a) full database migration file (in Ruby), creating a schema for the persistent entities from the domain (records and machines), and (b) excerpt from the translation of domain records to ActiveRecord classes, with mappings of fields to database columns and field interceptors.

```ruby
class UpdateTables < ActiveRecord::Migration
  def change
    create_table :auth_clients do |t|
      t.column :auth_token, :string
      t.references :user
      t.column :user_type, :string
      t.column :type, :string
      t.timestamps
    end
    create_table :auth_servers do |t|
      t.column :type, :string
      t.timestamps
    end
    create_table :auth_users do |t|
      t.column :name, :string
      t.column :email, :string
      t.column :password_hash, :string
      t.column :status, :string
      t.column :type, :string
      t.timestamps
    end
    create_table :chat_rooms do |t|
      t.column :name, :string
      t.references :server_as_room
      t.column :server_as_room_type, :string
      t.column :type, :string
      t.timestamps
    end
    create_table :msgs do |t|
      t.column :text, :text
      t.references :sender
      t.column :sender_type, :string
      t.references :chat_room_as_message
      t.column :type, :string
      t.timestamps
    end
    create_table :sessions do |t|
      t.string :session_id, :null => false
      t.string :data
      t.timestamps
    end
    add_index :sessions, :session_id
    add_index :sessions, :updated_at
  end
end
```

```ruby
class Msg < Red::Model::Record  # < ActiveRecord::Base
  attr_accessible :text
  belongs_to :sender, :class_name => "User",
             :foreign_key => :sender_id
  belongs_to :chat_room_as_message, :class_name => "ChatRoom",
             :foreign_key => :chat_room_as_message_id,
             :inverse_of => :messages

  # interceptors for field getters and setters
  def text()     intercept_read: {text}       [super] end
  def text=(val) intercept_write: {text, val} [super] end
end

class ChatRoom < Red::Model::Record  # < ActiveRecord::Base
  attr_accessible :name
  has_and_belongs_to_many :members, :class_name => "User",
             :foreign_key => :chat_room_id,
             :association_foreign_key => :user_id,
             :join_table => "chat_rooms_users_members"
  has_many :messages, :class_name => "Msg",
             :foreign_key => :chat_room_as_message_id,
             :dependent => :destroy
  belongs_to :server_as_room, :class_name => "Server",
             :foreign_key => :server_as_room_id,
             :inverse_of => :rooms

  # interceptors for field getters and setters
  def name()     intercept_read: {name}       [super] end
  def name=(val) intercept_write: {name, val} [super] end
end

class User < RedLib::Web::Auth::AuthUser  # < Red::Model::Record
  attr_accessible :status
  has_and_belongs_to_many :chat_rooms_as_member, :class_name => "ChatRoom",
             :foreign_key => :user_id,
             :association_foreign_key => :chat_room_id,
             :join_table => "chat_rooms_users_members"
  has_many :msgs_as_sender, :class_name => "Msg",
             :foreign_key => :sender_id, :inverse_of => :sender
  has_many :clients_as_user, :class_name => "Client",
             :foreign_key => :user_id, :inverse_of => :user

  # interceptors for field getters and setters
  def status()     intercept_read: {status}       [super] end
  def status=(val) intercept_write: {status, val} [super] end
end
```

Figure 5(a) gives a full listing of the database schema (in the form of a Ruby migration class, standard for the Rails framework) that RED automatically generated for the IRC example. This schema supports all the features of the model, so the programmer does not even have to look at it. ActiveRecord (the object-relational mapper used in Rails and in our framework) implements the single table inheritance strategy for handling inheritance. Hence, for each base type we generate one table with columns for all fields of all of its subtypes, plus an extra string-valued column (named :type) where the actual record type is stored. For example, in the :auth_users table, the first three columns correspond to fields from AuthUser and the fourth column is for the single field from User.
Furthermore, in every other table referencing such a table, an additional type column must be added to denote the declared type of the corresponding field, as in the :msgs table (columns :sender_id and :sender_type). When a record field type is of arity greater than 1, a separate join table must be created to hold that relation. This is the case with the ChatRoom.members field (referencing a set of Users). The corresponding join table (:chat_rooms_users_members) stores all tuples of the :members relation by having each row point to a row in the :chat_rooms table and a row in the :users table. In the special case when a field owns a set of records (e.g., field ChatRoom.messages, meaning that a given message can be referenced via that field by at most one chat room), instead of a join table, a referencing column is placed in the table corresponding to the type of that field (the :chat_room_as_message column in table :msgs). The last `create_table` statement in Figure 5(a) simply creates a table where the session data will be stored, and is independent of the domain data model.

Despite being mostly straightforward, writing migrations by hand is still tedious and time consuming, and, for developers new to Rails, can often be a source of mysterious runtime errors. Even after those initial errors have been fixed, the gap between the schema and the application model still remains. RED eliminates all these issues by having a single unified model of the system and automatically driving various implementation-level technologies (such as the database schema maintenance) directly from it.

### 4.4.2 ActiveRecord Classes and Reflections

As we explained in Section 4.2, the record keyword in RED is actually implemented as a Ruby function that creates a new class and assigns a named constant to it. Here we discuss the generated record classes (listed in Figure 5(b)) in more detail.
ActiveRecord provides "reflections" for specifying associations between models (i.e., records in our terminology). Primitive fields are declared with `attr_accessible` (e.g., ChatRoom.name), one-to-many associations with `has_many` on one side and `belongs_to` on the other (e.g., ChatRoom.messages), and many-to-many associations with `has_and_belongs_to_many` (e.g., ChatRoom.members). Various options can be provided to specify the exact mapping onto the underlying database schema. As with migration generators, Rails provides generators for ActiveRecord model classes as well, but again, with limited features and capabilities. While most of the schema-mapping options (e.g., `:foreign_key`, `:join_table`, `:association_foreign_key`) can be omitted if the naming convention is followed when the schema is written, the programmer still has to manually write these reflections for all but primitive fields. Furthermore, ActiveRecord requires that reflections are written on both sides of an association, meaning that each non-primitive field has to have its inverse explicitly declared in the opposite class (which is another step that our system eliminates). Finally, even though the database schema and the model classes are coupled, there is nothing that keeps them in sync in standard Rails. This not only makes the development process more cumbersome and error-prone, but also makes it difficult to perform any system redesign or refactoring. Controlling the generation of model classes also lets us intercept all field accesses, where we perform all the necessary security checks, detect changes to the data model for the purpose of updating client views, wrap the results of getter methods to enable special syntax (e.g., the Alloy-style relational join chains), etc. 
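The last point, wrapping getter results to enable Alloy-style join chains such as Server.rooms.members, can be approximated by a small collection wrapper; Rel is our illustrative name, not RED's API:

```ruby
# A tiny relation wrapper: accessing a field on a set of objects yields
# the union of that field's values over all elements (a relational join).
class Rel
  include Enumerable

  def initialize(items)
    @items = items.to_a
  end

  def each(&b) @items.each(&b) end
  def to_a; @items; end

  # Any unknown message is treated as a field access mapped over the set.
  def method_missing(field, *args)
    vals = @items.flat_map do |x|
      v = x.public_send(field, *args)
      v.is_a?(Enumerable) ? v.to_a : [v]
    end
    Rel.new(vals.uniq)
  end

  def respond_to_missing?(*) true end
end
```

With Room and Server objects wrapped this way, `Rel.new([server]).rooms.members` flattens the rooms relation and then the members relation, deduplicating along the way, which matches the relational-join reading of the dot operator described in Section 4.2.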
### 4.4.3 JavaScript Model for the Client-Side

One of the main ideas behind SUNNY is to have a single unified model of the system, and a model-based programming paradigm that extends beyond language and system boundaries. In RED, we wanted to preserve this idea and enable the same kind of model-based programming style on both the server side and the client side, despite the language mismatch. More concretely, we wanted to provide the same high-level programming constructs for instantiating and asynchronously firing events in JavaScript on the client side, as well as constructing and manipulating records. To that end, we translate the system domain model into JavaScript, to make all the model meta-data available on the client side. We also implemented a separate JavaScript library that provides prototypes for the generated model classes, as well as many utility operations. Figure 6 gives an excerpt from the translation of the IRC application’s domain model. At the top are constructor functions for all records, machines, and events. The `mk_record` and `mk_event` functions (part of our library) take a name and (optionally) a super constructor, and return a constructor function with the given name and a prototype extending the super constructor’s prototype. This is followed by the meta-data for each record, machine, and event, which contains various information about the type hierarchy, fields, field types, etc. All this information is necessary for our library to be able to provide generic and application-agnostic operations. One such operation was mentioned before, in Section 4.1, where we talked about how DOM elements having the `data-trigger-event` attribute are automatically turned into event triggers. Let us finally take a look at how asynchronous message sending is implemented on the client side, that is, how such an operation can be specified declaratively, directly in an HTML template file, without writing any Ajax code.
The chat_room.html.erb template (Figure 4(c)) contains a text input field and a send button, with the intention to trigger the SendMsg event and send whatever message is in the text input field whenever the send button is pressed. To achieve that, we added three HTML5 data attributes to the send button element; we used `data-trigger-event`, as before, to denote the type of the event, and two `data-param` attributes to specify the two mandatory arguments of the SendMsg event, room and msgText. The value for the room parameter is known statically—it is exactly the chat room object for which the chat_room template is being executed. However, that value is an object, so it is not possible to directly embed it in the template as a string-valued attribute. Instead, we inline a small piece of JavaScript code that, when executed, creates an equivalent room on the client side. Knowing the id of that room, and having a full replica of the model classes on the client, that code is as simple as

```javascript
new ChatRoom(<%=room.id%>);
```

We only need to tell our JavaScript library that the value we are passing is not a string, but code, by enclosing it in $()⁴. The value for the msgText parameter is not known statically, and has to be retrieved dynamically when the user presses the send button. As in the previous case, we can specify that by inlining a piece of JavaScript that finds the input text field by its id (using the jQuery syntax `$('#<id>')`) and reads its current value (by calling the `.val()` function). An alternative approach to declaratively specifying event parameter bindings, one that would require no JavaScript from the programmer, would be to somehow annotate the input text field (e.g., again by using the HTML5 data attributes) as the designated value holder for the msgText event parameter.
A drawback of such an approach is that, in general, it leads to code fragmentation, where a single conceptual task can be specified in various different (and not predetermined) places, potentially significantly reducing code readability. For that reason, we thought it was better to have all the code and specification in one place, even if the user has to write some JavaScript.

### 4.5 Dynamic Template-Based Rendering Engine

To go along with this declarative approach for programming the core business logic of an event-based system, RED implements a mechanism for declaratively building graphical user interfaces. The main responsibility of this mechanism is to automatically and efficiently update and re-render the GUI (or relevant parts of it) when a change is detected in the data model. This idea is similar to the concept of “data bindings” (e.g., [52, 56]), but is more general and more flexible.

---

³ Our JavaScript library actually does not complain if a numeric id is passed where a record object is expected—having all the meta-model information available, it can easily find the event parameter by the name, look up its type, and reflectively construct an instance of that type. Instead of using this shortcut (which works only for record objects) in the main text, we used a more verbose version to illustrate a more general approach and all of its power and flexibility.

⁴ Note that this dollar sign has nothing to do with the jQuery dollar sign; it is rather our own syntax for recognizing attribute values that should be computed by evaluating the JavaScript code inside $().

---

Figure 6. Excerpt from the JavaScript translation of the domain model, which the client-side code can program against.

Traditionally, GUIs are built by first constructing a basic visual layout, and then registering callbacks to listen for events and dynamically update bits and pieces of the GUI when those events occur.
In contrast, we want the basic visual layout (like the one in Figure 4) to be sufficient for a dynamic and fully responsive GUI. In other words, we want to let the designer implement (design) a single static visualization of the data model, and from that point on rely on the underlying mechanisms to appropriately and efficiently re-render that same visualization every time the underlying data changes. To implement this approach, we expand on the well-known technique of writing GUI widgets as textual templates with embedded formulas (used to display actual values from the data model) and using a template engine [7] to evaluate the formulas and paste the results in the final output. To specify input templates, we use the ERB language (the default template language in Rails) without any modifications. Unlike the existing renderer for ERB, however, our system detects and keeps track of all field accesses that happen during the evaluation of embedded formulas. Consequently, the result of the rendering procedure is not a static text, but a view tree where embedded formulas are hierarchically structured and associated with corresponding field accesses (as illustrated in Section 4.1). That view tree is what enables the permanent data bindings—whenever the underlying data changes, the system can search the tree, find the affected nodes, and automatically re-render them. In the context of web applications, only a textual response can be sent back to the client. Therefore, when an HTTP request is received, the associated template is rendered, and a view tree is produced. The view tree is saved only on the server side. The client receives the same plain-text result that the standard ERB renderer would produce along with some meta-data to denote node delimiters; the browser renders the plain-text response, and our client-side JavaScript library saves the meta-data. 
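The following toy sketch (hypothetical names; RED's actual renderer works on ERB templates in Ruby) illustrates the core bookkeeping: record which fields each formula reads while rendering, so that a later change to a field can pinpoint exactly which nodes need re-rendering.

```javascript
// Toy illustration of a tracking renderer (not RED's implementation):
// each rendered node remembers the fields its formula read.
const viewTree = [];  // flat stand-in for the hierarchical view tree

function renderNode(record, formula) {
  const accesses = [];
  const read = f => { accesses.push(f); return record[f]; };  // tracked read
  const html = formula(read);  // evaluate the embedded formula
  viewTree.push({record, formula, accesses, html});
  return html;
}

// Re-render only the nodes affected by a change to `field` of `record`.
function onFieldChanged(record, field) {
  return viewTree
    .filter(n => n.record === record && n.accesses.includes(field))
    .map(n => { n.html = n.formula(f => record[f]); return n.html; });
}

const room = {name: "general", topic: "misc"};
renderNode(room, read => `<h1>${read("name")}</h1>`);
renderNode(room, read => `<p>${read("topic")}</p>`);
room.name = "random";
console.log(onFieldChanged(room, "name"));  // [ '<h1>random</h1>' ]
```

A real view tree is hierarchical and carries the node delimiters sent to the client; this flat version only shows the tracking that makes the data bindings permanent.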
When a data change is detected on the server side, the server finds and re-renders the affected nodes and pushes plain-text node updates to the corresponding clients; each client then, already having the meta-data, knows where to cut and paste the received update to automatically refresh the GUI.

### 5. Automated Reasoning and Analysis

Although SUNNY simplifies the development of interactive web applications, and by construction eliminates a whole class of concurrency bugs, it does not eliminate all possible bugs. The user implementation of events can still fail to satisfy the functional requirements of the application. Applying the standard software quality assurance techniques to SUNNY programs is, therefore, still of high importance. We designed SUNNY with this in mind, and in this section we discuss how our programming paradigm is amenable to techniques like automated testing, model checking, and software verification.

### 5.1 Testing

Testing is the most widely used method for checking program correctness. Testing an event-driven system is both challenging and time consuming, because one needs to generate realizable traces (sequences of events). The challenging part in discovering realizable traces is that the preconditions need to hold for each event in the sequence, and the time-consuming part is that the traces can be long, and therefore, there can be too many of them to explore manually. Having both preconditions and postconditions of each event formally specified in our event model allows us to use symbolic-execution techniques [36], and build on recent successes in this domain [69], to discover possible traces automatically.
A symbolic execution engine would start with an empty path condition; at each step, the engine would consider all events from the model and discover the ones that are realizable from the current state (this can be done by using an automated SMT solver [12, 19] to check if there exists a model in which both the current path condition and the event’s precondition are satisfied). When an event is found to be realizable, a new state is created and the event’s postcondition is appended to the path condition for the new state. Since at each step of this process multiple events may be found to be realizable, the algorithm proceeds by exploring the entire graph, effectively yielding a state diagram by the end. Figure 7 depicts the state diagram extracted from the running example (Figure 1). Each node in the diagram describes a symbolic state and each edge describes a transition that can happen when the condition on the edge is satisfied and the event is executed. For example, moving from the initial state to the next state requires that a user initiate a SignIn event and provide a correct name and password. This transition results in the execution of the postcondition of the SignIn event. In addition to automated testing of traces, a state diagram can be used to automatically create a test environment – the state necessary before the execution of a test – for all unit tests. Considering Figure 1, if a developer wants to test the SendMsg event, there should be a registered user in a room. To create such a state, a sequence of other events has to be executed before SendMsg. As Figure 7 shows, the SignIn and CreateRoom event handlers must be executed first. Executing these events requires solving the precondition of each event on the path. Functional unit testing of events also becomes easier.
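The exploration loop described above can be made concrete with a toy sketch; here, concrete states and exhaustive enumeration stand in for path conditions and an SMT solver, and the guards and effects are simplified, hypothetical stand-ins for the IRC events' actual pre- and postconditions.

```javascript
// Toy trace discovery: each event has a guard (precondition) and an
// effect (postcondition); enumerate all realizable traces up to a bound.
const events = {
  SignIn:     {pre: s => !s.signedIn,             post: s => ({...s, signedIn: true})},
  CreateRoom: {pre: s => s.signedIn && !s.inRoom, post: s => ({...s, inRoom: true})},
  SendMsg:    {pre: s => s.inRoom,                post: s => ({...s, msgs: s.msgs + 1})},
};

function traces(state, depth) {
  if (depth === 0) return [[]];
  const out = [[]];
  for (const [name, ev] of Object.entries(events)) {
    if (!ev.pre(state)) continue;  // event not realizable in this state
    for (const t of traces(ev.post(state), depth - 1)) out.push([name, ...t]);
  }
  return out;
}

const all = traces({signedIn: false, inRoom: false, msgs: 0}, 3);
console.log(all.some(t => t.join(",") === "SignIn,CreateRoom,SendMsg"));  // true
```

Running it confirms, for instance, that SignIn, then CreateRoom, then SendMsg is a realizable trace, while no realizable trace starts with SendMsg.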
A black-box unit test for the SendMsg event would have to check that the message sent indeed gets added to the list of messages of the given room, that it gets added to the very end of the list, that no other messages get dropped from that list, etc. In SUNNY, this can be done directly, without having to set up any mock objects, e.g., to abstract the network and the actual peer points, as no network is required. In a traditional event-driven system, an implementation of a single functional unit is often fragmented over several classes. Consider the `SignIn` event: the user sends his or her credentials to the server, the server checks the authenticity, sends back the result, and based on that result, both the server and the client update their local state. In the traditional model, the client-side code can initiate the event (by sending a message to the server), and schedule a continuation that will run when a response is received. The continuation, which is typically a separate procedure, then implements the logic for updating the local state based on the server response. Such fragmented code is very hard to test as a unit, so it is often turned into an integration test, and integration tests are typically more laborious to write and require more elaborate setup. In SUNNY, because of the shared view of the global data, there is no need for such fragmentation; the event handler can be a single procedure that updates only the global data model, meaning that it can easily be tested as a unit.

### 5.2 Model Checking

Loosely coupled events without explicit synchronization and communication allow model checking to scale. The source of non-determinism in SUNNY models is the order in which events are executed. Because of the non-determinism in scheduling, a model may exhibit different behavior for the same input (i.e., the same values of event parameters) with a different order of event execution.
Conceptually, the goal of software model checking is to explore all such orders and ensure correct execution in each. Note that the exploration need consider only the semantics of the model and not the semantics of the underlying runtime system. Based on our prior experience with model checking actor programs [39, 65], X10 programs [27], and database applications [26], we believe that an efficient model checking approach can be developed for our new paradigm. For example, a model checker can be used to check end-to-end properties for all scenarios that the system can possibly exhibit. One such property could be “it is impossible that at one point two different chat rooms have two different users with the same name”. The tool can automatically either confirm that the property in question always holds, or find a scenario (i.e., a sequence of events leading to a state) in which the property is violated.

### 5.3 Verification and Program Synthesis

The technique of discovering realizable sequences of events can also be used to synthesize higher-level operations. For example, a novice IRC user may wonder what steps need to be taken in order to post a message in a chat room. Given such an end-goal, a tool can discover that one possible scenario to achieve that goal is to first `SignIn`, then `JoinRoom`, and finally `SendMsg`. An alternative solution would be to `CreateRoom` instead of `JoinRoom` at the second step. These scenarios can be displayed to the designer and serve as a guide to better understanding the possible functionalities of the system (which can be especially useful for bigger systems with many possible events).

### 6. Discussion

In the previous sections we described several new techniques and concepts this paper proposes to research and develop. In this section we discuss some benefits that directly follow from or are enabled by those techniques.
It can be argued that designing a system around a given property is the best way to ensure that the system correctly implements that property [34]. This paper is certainly in that spirit, since it encourages the programmer to carefully design and specify the core part of an event-driven system, i.e., the event model. Furthermore, the programmer does so mostly declaratively, by specifying key properties of events in isolation, without being bogged down by the operational details of the entire distributed system. We believe that, in most cases, even the event effects (postconditions) might be specified fully declaratively, and yet efficiently executed. We showed previously that declarative specifications can be executable (within a traditional object-oriented language) with certain performance handicaps [48]. Moreover, Near and Jackson [53] showed that, in the setting of a typical web application, most server-side actions (or “events” in our terminology) boil down to (possibly conditional) assignments to variables, which is still declarative, but much easier to execute efficiently. They also showed how this fact can be exploited to build a scalable verifier, which is of comparable complexity to executing a declarative postcondition in the first place. Our system also lends itself to model-based user interface software tools, which, by definition, take a high-level declarative model of an interactive system and help the programmer build a user interface (UI) for it (either through an integrated development environment or automated code generation) [61]. For example, a UI can be automatically generated from a SUNNY model that contains generic widgets for querying existing records, creating new instances of records defined in the model, creating associations between existing records via the fields defined in the model, triggering events, and so on, all while respecting the model-defined invariants, event preconditions, and privacy policies.
Some existing implementations of scaffolding can already generate a graphical UI that supports standard CRUD operations (create, read, update, delete) for all data model classes; in contrast, with SUNNY models scaffolding of events is supported as well, thus enabling a fully generic user interface that actually covers the full functionality of the system.

### 7. Evaluation

### 7.1 Comparison with a Web Application in Meteor

We implemented the IRC example in Meteor, a framework designed specifically for fast development of modern, interactive web applications, and compared it to the presented implementation in SUNNY. We make no strong claims based on this simple case study; we only quantify the effort needed to develop this example (in terms of the number of lines of code) and report on our experiences using both systems. SUNNY and Meteor share the idea that a single data model should be used across the application, even in a distributed setting, and that any updates to it should be automatically propagated to all connected nodes. The main difference is in the representation of the shared data. Meteor relies on MongoDB [1], a NoSQL database which stores data as untyped JSON documents, meaning that the database schema is fully dynamic and can change anytime. In contrast, models in SUNNY are strongly typed, which is essential to achieving a precise code analysis, but also necessary for implementing various tools, such as the GUI builder. For comparison, our implementation of the IRC example in Meteor is given in Figure 8. The number of lines of code is about the same, but we believe that SUNNY models tend to be more readable because they make much more explicit both the conceptual and the structural information about the system. Furthermore, because all the concepts in SUNNY models have a precisely defined semantics, these models can serve as good documentation on their own.
Another consequence of the lack of structure in the Meteor code is the tendency to tightly couple business logic and GUI code. For example, events are often directly tied to JavaScript UI events (e.g., lines 6, 24, 44), and their handlers can fetch values directly from the DOM elements (e.g., lines 7, 8, 25, 26). We believe that our model-based paradigm has a clear advantage over the dynamic NoSQL model when it comes to applying tools and techniques for various code analyses. In other words, Meteor is mainly focused on providing a platform where data updates are automatically propagated to relevant clients; we are also concerned about the software engineering aspects of the system, its overall design, correctness, testability, and analyzability, as described in Section 5. Most of the ideas from that section would be difficult to apply to Meteor programs.

```javascript
Rooms = new Meteor.Collection("rooms");

if (Meteor.isClient) {
  // Create Room
  Template.irc.events({
    'click input.createRoom': function () {
      var roomName = $('#roomName').val();
      var userName = $('#userName').val();
      // Ignore empty names
      if (roomName) {
        var room = Rooms.findOne({name: roomName});
        if (room == undefined) {
          Rooms.insert({
            name: roomName,
            creator: userName,
            members: [userName],
            messages: []});
          Session.set("userName", userName);
          Session.set("user_id", userName);
        }
      }
    },
  });

  // Join Room
  Template.irc.events({
    'click input.joinRoom': function () {
      var roomName = $('#roomName').val();
      var userName = $('#userName').val();
      // Check if the room exists
      if (roomName) {
        var room = Rooms.findOne({name: roomName});
        if (room != undefined) {
          var userRoom = Rooms.findOne({
            members: { $in: [userName] } });
          if (userRoom == undefined) {
            Rooms.update(room._id,
              {$push: {members: userName}});
            Session.set("userName", userName);
          }
        }
      }
    },
  });

  // Send a Message
  Template.irc.events({
    'click input.send': function () {
      var userName = Session.get("userName");
      // Create a message to be sent
      var message =
        userName + ": " + $('#message').val() + "\n";
      var room = Rooms.findOne({
        members: { $in: [userName] } });
      Rooms.update(room._id,
        {$push: {messages: message}});
    },
  });
}
```

Figure 8. Implementation of the IRC example in Meteor.

### 7.2 Comparison with a Client-Server System in Java

In this section, we quantify the effort it took us to build a relatively simple real-time, multi-player game in Java, and discuss how using a technology like SUNNY would significantly simplify certain steps in the process. In fact, the challenges we encountered while developing this game actually inspired the SUNNY project. SNAP’N’SHOT [2] is a twist on paintball. In paintball, players carry paint guns and shoot one another by firing paint bullets. In SNAP’N’SHOT, players carry cell phones and shoot one another by taking pictures. The game targets the Android platform and is implemented entirely in Java as a client-server system. The main challenge in developing this game was establishing a solid architecture for concurrent event processing and real-time client notification. The effort to manually implement a message-passing backbone, synchronize accesses to shared data, keep connections alive, and keep all the clients updated resulted in 4000 lines of Java code, as well as several tough concurrency bugs along the way. All that effort could be reduced to writing a simple model in SUNNY, similar to the one we used for the IRC example. SNAP’N’SHOT defines events that are equivalent to CreateRoom, JoinRoom and SendMsg (except that they are called CreateGame, JoinGame, and ShotFired); its data model also matches that of IRC quite closely. We have implemented a prototype of SUNNY for client-server Java programs (communicating over sockets), but we have yet to retrofit the implementation of SNAP’N’SHOT to use the new technology.

### 8. Related Work

### 8.1 Event-Driven Programming

There are two main styles of building distributed systems: (1) asynchronous or event-driven, and (2) using threads and locks. There has been a lot of debate over whether one is superior to the other. Dabek et al. [17] convincingly argue that event-driven systems lead to more robust software, offer better performance, and can be easily programmed given an appropriate framework or library. There exist many frameworks and libraries designed to support the event-driven programming paradigm. They are all similar to ours in that they provide an easy, event-driven way of writing distributed applications. Meteor, previously discussed, is one such library; another popular one is Node.js [66]. They eliminate the need to manually manage threads and event queues, but typically do not provide an abstract model of the system, amenable to formal analysis. Approaches like TinyGALS [14] and ESP∗ [64], which focus on programming embedded systems, also provide special support for events. The TinyGALS framework additionally offers a structured model of the whole system and also implements global scheduling and event handling. It uses code generation to translate models into an executable form, unlike ESP∗, which embeds the Statechart [31] concepts in a high-level general-purpose (Java-like) language. ESP∗ mainly focuses on correctly implementing the Statechart semantics. Tasks [23] provides language support for complex tasks, which may consist of multiple asynchronous calls, to be written as a single sequential unit (procedure), without having to explicitly register callback functions. This is achieved by a translation of such sequential procedures to continuation-passing-style code. Tasks is not concerned with specifying the top-level event model of the system, and is orthogonal to our framework. The implicit invocation mechanism [25] provides a formal way of specifying and designing event-driven systems.
Events and the bindings of events to methods (handlers) are decoupled and specified independently (so that the handler can be invoked “implicitly” by the runtime system). This provides maximum flexibility but can make systems difficult to understand and analyze. In our framework, we decided to take the middle ground by requiring that one event handler (the most essential one, the one that implements the business logic of the system by updating the core data model) is explicitly bound to an event. Functional Reactive Programming [21] is a programming paradigm for working with mutable values in a functional programming language. Its best known application is in fully functional, declarative programming of graphical user interfaces that automatically react to changing values, both continuous (like time, position, velocity) and discrete (also called events). Implementations of this paradigm include Elm [16] (a standalone language that compiles to HTML, CSS and JavaScript) and Flapjax [45] (an implementation embedded in JavaScript, designed specifically to work with Ajax). Our approach to self-updating GUIs can also be seen as a specific application of functional reactive programming.

### 8.2 Data-Centric Programming

Another, increasingly popular, method for specifying distributed data management uses Datalog-style declarative rules; it has been applied in the domain of networking (e.g., Declarative Networking [42], Overlog [41]), distributed computing (e.g., the BOOM project [10], Netlog [29]), and also web applications (e.g., Webdamlog [9], Reactors [22], Hilda [70], Active XML [8]). The declarative nature of Datalog rules makes this method particularly suitable for implementing intrinsically complicated network protocols (or other algorithms that have to maintain complex invariants); manually writing an imperative procedure that correctly implements the specification and respects the invariants is a lot more difficult in this case.
By contrast, we focus on applications that boil down to simple data manipulation in a distributed environment (which constitutes a large portion of today’s web applications), and one of our goals is to provide a programming environment that is easy to use even by non-expert programmers who are already familiar with the object-oriented paradigm.

### 8.3 Code Generation and Program Synthesis

The idea of using increasingly higher-level abstractions for application programming has been a common trend since the 1950s and the first Autocoder [28] systems, which offered an automatic translation from a high-level symbolic language into actual (machine-level) object code. The main argument is that software engineering would be easier if programmers could spend their time editing high-level code and specifications, rather than trying to maintain optimized programs [11]. Our approach is well aligned with this idea, with a strong emphasis on a particular and widely used domain of web application programming. Executable UML [44] (xUML) also aims to enable programming at a high level of abstraction by providing a formal semantics to various models in the UML family. Model-driven development approaches based on xUML (e.g., [46, 47]) translate the UML diagrams by generating code for the target language, and then ensure that the diagrams and the code are kept in sync. Our system is conceptually similar, and it also follows the model-driven development idea, but instead of using code generation to translate models (diagrams) to code, we want to make models first-class citizens and to have an extensive framework that implements the desired semantics by essentially interpreting the models at runtime (an actual implementation may generate and evaluate some code on the fly to achieve that). Minimizing the amount of auto-generated code makes the development process more convenient, as there is no need to regenerate the code every time the model changes.
Similar to code generation, the main goal of program synthesis is also to translate code from a high-level (often abstract, declarative) form to a low-level executable language. Unlike code generation, however, a simple translation algorithm is often not sufficient; instead, more advanced (but typically less efficient) techniques (e.g., search algorithms, constraint solving, etc.) have to be used. The state of the art in program synthesis focuses on synthesizing programs from various descriptions, e.g., sketches [63], functional specifications [37], input-output pairs [32], graphical input-output heaps [62], or first-order declarative pre- and post-conditions [40]. The core of our framework is a little further from the traditional program synthesis techniques; although it does aim to provide a high-level surface language for specifying/modeling various aspects of the system (events, privacy policies, GUI templates), it does not perform any complex search-based procedure to synthesize a piece of code. Given the declarative and formal nature of our models, however, program synthesis is still relevant to this work, as it might be applied to implement some advanced extensions, e.g., to synthesize higher-level operations from basic events (as briefly discussed in Section 5).

### 8.4 Declarative Privacy Policies

In their most general form, policies are used to map each user (subject), resource (object) and action to a decision, and are consulted every time an action is performed on a resource by a user [38]. In our framework, resources correspond to fields, actions correspond to field accesses⁵, and the user is the entity executing the action. Systems for checking and ensuring privacy policies are typically based either on Access Control Lists (ACL) or Information Flow (IF). ACLs attach a list of permissions to concrete objects, whereas IF specifies which flows (e.g., data flowing from variable \(x\) to variable \(y\)) are allowed in the system.
In both cases, when a violation is detected, the operation is forbidden, for example by raising an exception. Our security model is more in the style of access control lists, in the sense that we attach policies to statically defined fields (as opposed to arbitrary pieces of data), but it has a flavor of information flow as well, since we automatically check all data flowing to all different machines and ensure that no sensitive information is ever sent to a machine that does not have the required permissions (which, in our system, means that there is no policy that explicitly restricts that access). Similar to the access modifiers in traditional object-oriented languages (e.g., private, protected, public, etc.), our model also focuses on specifying access permissions for various fields. However, the difference is that our permission policies are a lot more expressive and more flexible than static modifiers, and can also depend on the dynamic state of the program. In addition, they are completely decoupled from the data model, so the policies can be designed and developed independently. Information flow systems either rely on sophisticated static analysis to statically verify that no violating flows can exist (e.g., Jif [50, 51]), or on dynamically labeling sensitive data and tracking where it flows (e.g., RESIN [72] or Dytan [15]). Unlike most other information flow systems, Jeeves [71] allows policies that are specified declaratively and separately from the rest of the system, and instead of halting the execution when a violation is detected, it relies on a runtime environment to dynamically compute values of sensitive data before they are disclosed so that all policies are satisfied. This approach is similar to our serialization technique, where we automatically hide the sensitive field values before the data is sent to a client. Margrave [18, 24, 54] implements a system for analyzing policies.
Similar to our system, Margrave policies are declarative and independent of the rest of the system (which they call the “dynamic environment”). Their main goal, however, is to statically analyze policies against a given relational representation of the environment, and to check if a policy can be violated in any possible (feasible) scenario, whereas we are only interested in checking field accesses at runtime. To enable efficient analysis, the Margrave policy language is based on Datalog and is more restrictive than the first-order logic constraints that we allow in our policies.

---

⁵ Our policy language currently does not allow differentiating between reads and writes, but it could; we will consider adding that extension if we encounter examples where that distinction proves to be necessary.

---

Attribute-based access control (ABAC) adds attribute \((name \rightarrow value)\) pairs to any entity in the system (e.g., user, resource, subject, object, etc.) so that policies can be expressed in terms of those attributes rather than concrete entities. Our system can be viewed as an instantiation of this model: our fields can be seen as attributes, machines as subjects, and records as resources; both records and machines can have fields, and policies are free to query field values. Many other ABAC systems have been designed and implemented (e.g., [49, 67, 73]), each, however, using a somewhat different model from the others. Jin et al. [35] recently proposed a formal ABAC model to serve as a standard, and used it to express the three classical access control models (discretionary [59], mandatory [58], and role-based [60]).

### 8.5 GUI Builders

Our dynamic template engine for building graphical user interfaces combines two existing techniques: data binding and templating.
Data binding allows select GUI widget properties to be bound to concrete object fields from the domain data model, so that whenever the value of the field changes, the widget automatically updates its property. Changes can optionally be propagated in the other direction as well, that is, when the property is changed by the user, the corresponding field value gets updated simultaneously.

Templating, on the other hand, takes a free-form text input containing a number of special syntactic constructs supported by the engine which, at the time of rendering, get dynamically evaluated against the domain data model and inlined as strings in the final output. Such constructs can include embedded expressions (formulas), control flow directives (if, for loops, etc.), or, in the general case, arbitrary code in the target programming language. This adds extra flexibility, as it allows generic programming features to be used in conjunction with static text, enabling widgets with dynamic layouts to be defined.

Even though existing data binding implementations (e.g., WPF and its textual UI layout language XAML \([52]\) for .NET, UI binder \([56]\) for Android, JFace \([30]\) for Java, Backbone \([55]\) for JavaScript) allow for textual widget templates, those templates are typically allowed to contain only simple embedded expressions (e.g., a path to an object’s field), and only at certain positions in the template (to provide bindings only for select widget properties). No control structures are allowed, which makes it difficult to design a widget that chooses between two different layouts depending on the state of the application. Conversely, existing template engines (e.g., ASP \([43]\) for .NET, Haml \([4]\) and ERB \([3]\) for Ruby, FreeMarker \([68]\) for Java) provide all that extra flexibility, but do not preserve data bindings, making it difficult to push changes to the client when the model changes.
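Of the two techniques, data binding is the easier to sketch: it amounts to an observer attached to a model field, so that bound widget properties track the field's value. The following minimal Python sketch is illustrative only (it is not any framework's actual API; the `Observable` class and the dict standing in for a widget are invented for the example):

```python
# Illustrative sketch of one-way data binding: a model field whose changes
# are pushed to every bound listener (e.g., a widget property setter).
class Observable:
    def __init__(self, value):
        self._value = value
        self._listeners = []

    @property
    def value(self):
        return self._value

    @value.setter
    def value(self, new):
        self._value = new
        for listener in self._listeners:
            listener(new)  # propagate the change to every bound property

    def bind(self, listener):
        self._listeners.append(listener)
        listener(self._value)  # initial synchronisation


# A plain dict stands in for a widget with a "text" property.
widget = {}
name = Observable("Alice")
name.bind(lambda v: widget.__setitem__("text", v))
name.value = "Bob"  # model change; the widget property updates automatically
```

A template engine adds the missing half: control flow and arbitrary expressions around such bindings, which is exactly the combination discussed above.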
In this work, we combine these two techniques to achieve the flexibility of generic template engines while still having the luxury of pushing changes to the clients and automatically re-rendering the UI. The main reason why this makes the problem more difficult than the sum of its parts is that formulas in the template can evaluate to arbitrary elements of the target language (e.g., HTML), including language keywords, special symbols, tag names, etc. This is unlike the existing UI frameworks with data bindings, where all bindings are assigned to (syntactically strictly defined) widget properties.

### 9. Conclusion

Advances in web frameworks have made it much easier to develop attractive, featureful web applications. Most of those efforts are, however, mainly concerned with programming servers and their clients in isolation, providing only a set of basic primitives for intercommunication between the two sides, thus imposing a clear boundary. We believe that there is an entire class of web applications, and distributed programs in general, for which that boundary can be successfully erased and removed from the conceptual programming model the programmer has to bear in mind. SUNNY is a generic programming platform for developing programs that fall into that class.

### Acknowledgments

This material is based upon work partially supported by the National Science Foundation under Grant Nos. CCF-1138967, CCF-1012759, and CCF-0746856. We would like to thank the anonymous reviewers for their thoughtful comments on the draft of this paper.

### References

[2] SNAP’N’SHOT home page. http://www.snapnshot.me/.
A Requirements Capture Method and its use in an Air Traffic Control Application*

T. L. McCluskey (lee@zeus.hud.ac.uk) and J. M. Porteous (julie@zeus.hud.ac.uk)
School of Computing and Mathematics, The University of Huddersfield, Huddersfield, West Yorks HD1 3DH

Y. Naik (yogesh@cs.city.ac.uk) and C. N. Taylor (christa@cs.city.ac.uk)
Department of Computer Science, The City University, Northampton Square, London EC1V OHB

S. Jones (S.Jones@hertfordshire.ac.uk)
Department of Computer Science, University of Hertfordshire, College Lane, Hertfordshire

Keywords: requirements capture, formal specification, knowledge representation

*This research was sponsored by the NATS division of The Civil Aviation Authority

Summary

This paper describes our experience in capturing, using a formal specification language, a model of the knowledge-intensive domain of oceanic air traffic control. This model is intended to form part of the requirements specification for a decision support system for air traffic controllers. We give an overview of the methods we used in analysing the scope of the domain, choosing an appropriate formalism, developing a domain model, and validating the model in various ways. Central to the method was the development of a formal requirements engineering environment which provided automated tools for model validation and maintenance.

Introduction

The problems inherent in Software Engineering, especially those relating to the production of safety-critical systems, are nowadays all too apparent. Brown et al (in [1]) point out a number of these problems. We group them here into two classes:

- The problems of understanding the customer’s initial requirements, and maintaining them as they change.
- The problems inherent in the complexity, malleability and invisibility of software itself. That is, as well as being deceptively complicated, software is easily changed and, unlike the output of other engineering processes, is not a physical artifact.
In this paper we present an overview of a method for formally capturing requirements for knowledge-intensive systems that demand high integrity in their construction. We illustrate the method by its application in the development of a prototype decision support tool for air traffic control, involving separation of aircraft in the airspace over the North-East Atlantic. Using the method for this particular type of project, we believe that the problems in Software Engineering stated above can be addressed effectively. This work raises issues that span both software engineering and knowledge-based systems, including knowledge capture, formalisation, correctness, validation and automated reasoning.

The benefits of formally specifying requirements, to do with precision, removal of ambiguity, automated manipulation and so on, have been well argued with the appearance of appropriate formal languages (e.g. MAL [2]). Up to now, few real industrial applications have been reported, and it seems often to be the case that researchers concentrate on particular kinds of applications in order to promote a particular formalism. Our approach to requirements specification is based on a formalism-independent method, where the choice of formalism and appropriate engineering environment is a feature of that method. In outline the method encompasses the following steps:

- Scoping and Domain Analysis: determining the size and nature of the domain;
- Formalism Choice and Customisation: selecting and customising a language and environment for domain capture;
- Domain Model Capture: eliciting knowledge and capturing a model of the domain using the chosen formalism;
- Diverse Validation: a five point validation plan that includes dynamic testing, hand validation, and static analysis by formal reasoning.
The particular significance of this work lies in:

- the use of an expressive *formal specification language* (Many-Sorted First Order Logic) to capture part of the requirements specification for a real application;
- the construction and customisation of a formal requirements engineering environment (which is given the acronym “FREE” throughout the paper) that was used as a framework for the capture and validation of the model. As well as carrying out all forms of syntactic checks, and allowing reasoning about the behaviour of the model to be carried out, the FREE translates the requirements model into:
  - a “hand validation” form, for examination by domain experts who may be unfamiliar with formal logical notation;
  - a prototype, for use within
    - a test harness to perform dynamic testing of the requirements, and
    - a simulator to allow users access to an animated version of the requirements, allowing “hands-on” validation by domain experts.

This approach addresses the two main problems mentioned at the start of the introduction. The software engineer’s idea of what the users or customers want is precisely captured in a validated, maintainable formal model. It may be that the simulator can be further developed to satisfy the full user requirement (for example it could be optimised to satisfy response-time requirements). In this case the problems to do with software development are reduced to those encountered in the development of the simulator, and in constructing, acquiring and customising the tools that make up the model’s environment. At the very least, this approach aims to deliver a well-validated formal specification with which to contract software developers, as well as a simulator with which to dynamically check final software.

In the project described here, we were concerned with the specification of requirements for software which would implement rules used in oceanic air traffic control, as we will explain below.
Throughout the paper, we use the term “application domain” (or “domain”) to mean that part of reality with which we are concerned, and the term “requirements model” (or “model”) to refer to the specification of the domain. While a complete specification of requirements for the software would include definitions of “non-functional” requirements, such as those relating to its performance or user interface, the aim of our project was simply to capture “functional” requirements relating to the implementation of rules for aircraft separation. The part of the requirements model (or requirements “specification”) we are concerned with here is thus equivalent to the model of the domain which we aim to capture.

An Overview of the FAROAS project

The formal requirements method was developed within the context of the FAROAS project, a research project funded by The Civil Aviation Authority. The general area of interest of the work was the separation of aircraft in oceanic airspace. Air traffic in the airspace over the north-eastern part of the Atlantic – the “Shanwick Oceanic Control Area” (Shanwick OCA) – must be separated in accordance with minima laid down by the International Civil Aviation Organisation. The separation distance that is applicable in any given situation depends on a large number of factors, including the type of an aircraft, its navigational accuracy and whether it is climbing or descending. A structured, natural language description of these separation standards is contained in the Manual of Air Traffic Services, Part 2 (MATS-2) [3]. It is the responsibility of air traffic control officers to ensure that all aircraft within Shanwick OCA are separated by at least the required minima through the processes of conflict prediction and conflict resolution. Conflict prediction is the process of detecting potential separation violations by comparing the projected flight paths (flight profiles) of pairs of aircraft.
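Conflict prediction as just described (pairwise comparison of projected flight profiles against a required minimum separation) can be sketched as follows. The sketch is purely illustrative: the profile format, callsigns and the 60 nm figure are invented assumptions, not the project's actual data or algorithm.

```python
# Illustrative sketch of conflict prediction: compare the projected positions
# of every pair of aircraft at the times where both are projected, against a
# required minimum separation.
from itertools import combinations


def predict_conflicts(profiles, minimum_nm):
    """profiles: {callsign: {time: (x_nm, y_nm)}}; returns (a, b, time) triples."""
    conflicts = []
    for (a, pa), (b, pb) in combinations(sorted(profiles.items()), 2):
        for t in sorted(set(pa) & set(pb)):  # times where both are projected
            (x1, y1), (x2, y2) = pa[t], pb[t]
            if ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5 < minimum_nm:
                conflicts.append((a, b, t))
    return conflicts


profiles = {
    "ACA865": {0: (0.0, 0.0), 60: (50.0, 0.0)},
    "BAW179": {0: (0.0, 80.0), 60: (50.0, 20.0)},
}
# The pair is adequately separated at t=0 (80 nm apart) but not at t=60 (20 nm).
assert predict_conflicts(profiles, minimum_nm=60.0) == [("ACA865", "BAW179", 60)]
```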
Conflict resolution is the process of planning new conflict-free flight profiles. The air traffic control officers use an automated Flight Data Processing System (FDPS), a key component of which is the “conflict software”, that provides assistance for the processes of conflict prediction and resolution. A new FDPS is currently being developed, and we became involved with the capture of the requirements for the conflict software in the new FDPS.

The long term goal to which the work of the FAROAS project contributed was to develop a formal specification of the requirements for conflict prediction in oceanic airspace. The aim was to formalise and make complete the requirements of the Shanwick OCA aircraft separation standards with respect to the specific function of predicting and explaining separation violations (i.e. conflicts) in aircraft flight plans, in such a way that those requirements can be rigorously validated and maintained. Ultimately the formalisation might serve as an independent standard for the procurement of conflict software systems. Hence the role of this document would be comparable to that of the MATS-2 in operational ATC. Within this context the objectives of the FAROAS project were:

- to identify a formalism for requirements capture that could be validated by ATC experts;
- to formally capture the functional requirements of conflict prediction;
- to establish a method for validating (and re-validating when necessary) the formally captured requirements.

The project commenced in May 1992 and was based at The City University in London. For the duration of the project, the project team consisted of two full-time research fellows, a project manager, a project consultant, a quality manager and a project manager from The Civil Aviation Authority. Also, a research fellow worked for 3 months on the use of automated proof assistants for maintaining the requirements specification.
From the outset, project plans detailed the major deliverables and milestones against which the progress of the research was to be judged. Also implemented from the start of the project was a detailed quality plan which set down procedures for formal review of all project deliverables (reports, specifications and software). The project terminated successfully at the end of 1993 upon delivery of the formally captured requirements model, which had been validated to the satisfaction of the client and sufficiently to demonstrate the method.

The Scope and Nature of the Domain

Criteria used in the domain analysis

The success of requirements capture depends greatly on establishing a clear scope for the project, and on a good choice of formalism. The scope of our application, which we will refer to as the oceanic ATC domain, was confined to conflict prediction, although we wished to keep open the possibility of adding a conflict resolution component at a future stage. Once the scope of a project has been established, the domain should be analysed to identify:

- the key sources of domain knowledge;
- the groupings and the nature of various types of knowledge;
- the size of the model to be constructed, and the aspects (e.g. time, agents) that need to be represented explicitly.

In formal requirements capture, we have to deal with the vague concepts of “knowledge” and “knowledge representation”. Since the field of knowledge representation has not yet arrived at generally agreed standard criteria or measures, the points above will instead be illustrated by example.

The key sources of knowledge

The key sources of knowledge will affect the planning of domain capture; for example, if the major source is experts rather than documents, the setting up of interviews and document reviews will tend to lengthen the project and increase its cost.
These key sources also play a crucial role in validation, and so are as important towards the end of a project as they are during the elicitation phase. For example, comparing the behaviour of a prototype that has been automatically generated from a requirements specification against the customer’s existing software is far easier than assembling groups of experts together to “hand validate” a set of formalised rules.

In the oceanic ATC domain the main knowledge source was:

- documentation – principally the separation standards encapsulated in the MATS-2 [3]; additional sources were the design documentation of the current conflict prediction software [4, 5].

Additional knowledge sources were:

- people – principally air traffic control officers;
- software – the current conflict prediction software.

The groupings and the nature of various types of knowledge

Unless there is some strong reason to the contrary, such as the need for a new technical solution, the captured model should reflect the existing groupings of knowledge. Each grouping should be analysed to determine the types of knowledge it contains and its likely interface with other groupings. The oceanic ATC domain contains two broad types of knowledge:

- rule-based: This includes a set of rules in natural language within the MATS-2 document that detail the minimum separation that must be maintained between aircraft in oceanic airspace. Also, there are rules that define a method of determining if two flight profiles will violate the minimum separation values that have been derived from the separation rules, in other words to detect potential conflicts.
A simple rule defining the scope of a segment (a section of a flight profile) is paraphrased below:

“If a profile containing a segment is wholly or partly in Shanwick OCA then a segment of such a profile starts at or after the first recognised point for oceanic conflict prediction if and only if the entry time of that segment is at or later than the time of the first recognised 4D point for oceanic conflict prediction of the profile containing that segment.”

- object-based: the objects, relationships and properties that underlie the rules. For example, “segment” is an important kind of object used in many of the rules. The rule above contains the following functional relationships:

  “the entry time of that segment”
  “the profile containing that segment”

The size of the model to be constructed

It is important to try to quantify the likely size of the model so that the project can be planned accordingly. This presumably extends work carried out during the feasibility study. One crude measure of model size is simply the number of axioms or rules that are predicted to appear in it. For the FAROAS project, model size was estimated by considering the:

- textual size of the main knowledge sources, in terms of relevant assertions about the oceanic ATC domain; and the
- number and complexity of discernible types of objects in the domain.

In choosing a metric for size we had to assume a particular formalism (possible metrics include number of equations, operations, rules or axioms predicted to be in the model). Using these observations, and choosing a “number of axioms” metric, we examined the available documentation and correctly estimated the size as being in the region of several hundred axioms.

The nature of the explicit knowledge

Another important question regarding the model is: which aspects of the domain will require explicit representation?
Some general aspects that might be considered here include:

- agents
- time
- states of knowledge and belief
- actions
- probabilities
- permissions and obligations

To some extent these aspects are independent of each other: some formalisms explicitly represent agents and belief, but none of the other aspects listed above; some explicitly represent actions, but none of the others; and so on. Once it has been decided that a particular aspect needs to be represented explicitly, further questions arise as to how this should be done. For example, in the case of time, the model of time adopted may be either discrete or continuous; it may be bounded or unbounded (in either past or future directions); it may involve time points or time intervals (or both); and from a syntactic point of view, it may involve the use of modal operators, or alternatively the use of terms denoting times.

The real domain of air traffic control clearly involves time and agents (e.g. pilots and air traffic controllers), who have beliefs and who perform actions (e.g. issuing clearances). However, since the focus of the FAROAS project was the rules governing oceanic aircraft separation and conflict prediction, we concluded that of the general aspects listed earlier, only time need be represented explicitly. The chosen model of time was discrete, and unbounded in the future. It was felt that times should be represented syntactically by explicit temporal terms, relative to a nominal zero time. The term “22 15 GMT day 2” is a typical example of such terms: the day number here is relative to an arbitrary “day 0”, since there is no need to refer to the actual calendar date as far as the separation rules are concerned.

Formalism Choice and Domain Capture

Criteria for Formalism Choice

Once the scope, size and nature of the domain have been determined, the most important initial aspect of formal requirements capture is the choice of formalism.
To some extent this will depend upon the experience and prejudices of the modellers, but there are also more objective criteria. In a similar study to this one (described briefly in [6]), an Airborne Collision Avoidance System (ACAS) was captured using the LOTOS specification language [7]. Describing the capture, Sowerbutts states that the main reason for the choice of LOTOS is the “natural mapping from ACAS onto LOTOS’s processes”. In other words LOTOS was chosen on the basis of how well it fitted the domain. Another factor was the availability of tool support for LOTOS (in particular an interpreter). Considerations such as these lead us to the following criteria for evaluating candidate formalisms, the first being the most important:

- **natural fit** : does the formalism fit the domain? Does it allow domain knowledge to be represented at an appropriate level of abstraction, so that the model can mirror the domain? This question is analogous to consideration of the “semantic gap” in programming language selection: the narrower the gap between language features and application, the more natural the selection of that language is said to be. The chief advantage of a natural fit is that it eases knowledge elicitation and validation. For example, a model that mirrors the domain can be directly viewed by domain experts (as was intended in the FAROAS project) and facilitates the construction of a tool for translating parts of the specification into a **validation form** that can be easily read by the domain experts.
- **support environment** : are practical tools available for the formalism? The kinds of tools required for formal requirements capture are type checkers, parsers, translators, interpreters, proof checkers and inferencing mechanisms. These tools should be available in an integrated environment that can be customised for a particular project.
- **maintenance** : if the domain is subject to change, can the model be easily and consistently updated to reflect this?
This question may in fact be combined with the one above – for example, are tools available to easily re-generate proof obligations to ensure model consistency?

- **expressiveness and extendibility** : will the formalism restrict natural expression in any way? If the scope or depth of requirements of the domain are increased, can the formalism be likewise extended?
- **formality** : does the language have a firm mathematical basis, where its meaning is clearly independent of and not tied to a program or interpreter? Is it possible to reason with the formalism in a precise and straightforward way, ideally with a tool such as a **proof assistant**?
- **experience and training** : are project staff initially familiar with the formalism, or will they require a period of learning or even formal training before they can use it effectively? If the latter is the case, considerable delay and additional cost may result. The same applies to future staff, who may have to maintain the system after the initial project team have left. A formalism that is uncommon or difficult to learn will thus introduce extra delays and costs throughout the whole life-cycle of the system.

Connecting all these criteria is the dominant issue of **validation**. The requirements model will have an interpretation that makes it a “model” of something real, making it analogous to an inductive scientific theory. Like a scientific theory, it cannot be formally proved absolutely correct or complete. However, its quality can be promoted by systematic validation, relying on diverse and largely automated validation processes. The link may be indirect, but a formalism that helps the validation process is also desirable.

**Evaluation of Candidate Language Groups**

At the start of the project we performed a survey and feasibility study into the use of various likely language groupings [8], and evaluated them according to the criteria above. Here we will summarise that evaluation, taking each group in turn.
Firstly, there are those languages traditionally used for knowledge representation in the areas of artificial intelligence and knowledge-based systems. These include rich, highly expressive languages such as frame-based representations [9], input languages for expert system shells [10] and variants of formal logic [11]. As early as the mid-1970’s, frame-based languages, which inspired the 1980’s boom in object-oriented languages and methods, allowed one to capture knowledge as a collection of related objects, each object having internal structure comprising slots (or attributes) and procedural attachments [9]. This slant towards rich, very high level languages seems to be at the expense of semantic rigour. Frame-like representations and expert system shell languages can be criticised for having a meaning which lacks a formal mathematical basis, and is too tied to an interpreter or an implementation. Hence, whereas the machine-independent languages of logic are possible candidates, in general the languages in this group do not score well on the *formality* criterion.

We might also have considered using a specialised requirements specification language, such as RML [12] or MAL [2]. While languages such as these are being developed specifically for use in capturing and modelling the requirements for certain kinds of system, they were judged not to be appropriate for use in our project. Languages such as MAL are more expressive and more complex than was judged necessary: for example, although Structured MAL provides the engineer with the ability to specify agents and obligations in the domain of interest, we had already decided that the FAROAS project would not need to be concerned with the explicit modelling of such phenomena. Other languages, such as RML, have been targeted at use in developing information systems and therefore embody concerns different to those involved in the development of technical decision support systems.
Still other requirements specification languages, such as that under investigation in the KAOS project [13], were judged to be at too early a stage in development to be used in specifying a real application. Support tools for such languages were also not readily available.

Other potential candidates were the established formal specification languages, which fall into two broad families. The first are those languages based on *equational algebra*, for example OBJ3 [14], AXIS [15], and LOTOS [7]. A specification written in OBJ3 is typically formed in a hierarchical structure of algebraic specifications of abstract data types. Specifications thus have an abstract, object-oriented flavour, supporting polymorphism and encapsulation, and their equational basis allows specifications to be prototyped using a re-write rule interpretation [16]. While it would be possible to build up definitions of the objects in the ATC domain in this way, the bulk of the domain (requiring a rich logical form) could not be represented naturally using equational expressions. In other words, the semantic gap between the rule-based ATC knowledge and an equational-based specification language was too wide.

The *model-based* formal specification languages (chiefly VDM-SL [17] and Z [18]) are based on first order logic and set theory, and have the advantages of a growing user-base and tool support including parsers and type checkers (e.g. *fuzz* for Z [19]). Specifications written in VDM-SL typically contain a mathematical model of a state involving composites of sets, sequences and mappings, as well as a collection of operations that specify state changes using pre- and post-conditions. We initially used a model-based notation to represent some of the objects in the *oceanic ATC domain* and some of the functions on those objects.
In an early project report on the domain analysis, flight profiles were represented as the following set [20] (the reader not familiar with this notation may safely ignore the details):

\[
\begin{aligned}
\text{Flight\_profiles} = \{(a, f) :\ & a \in \text{Aircraft} \ \land \\
& f \in \text{seq}[(\text{Flight\_positions} \times \text{Times}) \times \text{Aircraft\_speeds}] \ \land \\
& ((((p_1, t_1), s_1), ((p_2, t_2), s_2)) \in \text{adjacent}(f) \Rightarrow t_1 < t_2) \ \land \\
& (\forall s \in \text{ran}(\text{ran}(f)),\ s \leq \text{max\_speed}(\text{type}(a))) \ \land \\
& (\forall l \in \text{ran}(\text{dom}(\text{dom}(\text{ran}(f)))),\ l \leq \text{flight\_ceiling}(\text{type}(a))) \ \land \\
& \text{length}(f) \leq 2\}
\end{aligned}
\]

Our idea of an adequate representation for flight profiles shifted incrementally as our understanding deepened, however, so that the effort of creating (and typesetting) this initial definition was largely wasted. We soon realised that at this early stage any commitment to a model of the objects in the domain was premature. Rather, we needed to construct the requirements model using a loose specification, one that allows us to make the least commitment to the structure and behaviour of the model (as explained on page 19 of reference [21]). When capturing requirements one does not have a deep enough knowledge of the domain to commit to a particular representation using abstract mathematical building blocks typified by the set. If one creates an inappropriate partial model in this form, then throwing away the initial model and creating a new one wastes effort. In addition, our initial domain analysis reports containing set-based formulae, such as that shown above, were off-putting to our client. One final point against the use of a model-based formalism for this project concerns the way such notations promote specifications built around an abstract state.
Our application could not easily be given an interpretation that involved operations on a state, as the bulk of the data represented (monotonic) knowledge which would be used to come to a binary decision about aircraft separation. Hence, we would be unlikely to put these notations to full use. These deliberations led us to concentrate on more abstract languages based wholly on formal logic. A result of the domain analysis was that we did not need to represent uncertainties, beliefs, actions etc., which indicated that a straightforward first-order logic would be adequate, easing the problems of staff training and tool support. Also, the major part of the knowledge we were to capture was written using a logical phrasing (as the example paragraph on page 5 suggests) which could be captured at a natural level of abstraction by first order logic. To deal with objects in the application, classical logic can be enriched with sorts [22] defining classes of objects which share the same properties. Encapsulating primitive axioms in a sort definition also gives a natural structure to the specification, as explained in the next section.

**The Domain Capture Formalism**

Having decided on the type of formal language, a final decision was required between using an "off the shelf" formalism and customising our own. In the event we chose a formalism in the latter category, a customised version of Many-Sorted First Order Logic [22] which we refer to in the remainder of the paper as MSFOL. A strong candidate in the former category appeared to be Z [18], an alternative we will discuss in retrospect after an exposition of the use of MSFOL. We defined our version of MSFOL to have a simple structure of disjoint sorts, with rigidly sort-restricted functions and predicates, the sole exception being the numerical sorts and their predicates and functions. For these, overloading was allowed so that numerical operators could be used across the numerical sorts in the usual way.
For example, the symbol "<" could be used to compare two terms that were both of type natural numbers, integers or reals. Atomic wffs were composed of mix-fix predicates and functions, allowing expressions to be written with maximum readability; for example:

(Segment starts_at_or_after_first_recognised_pt_for_oceanic_cpr)

is an atomic wff, where Segment is a sort variable followed by a long but descriptive predicate name. The syntax was defined using a "definite clause grammar", expressed in the Prolog grammar rule notation [23]. Such a grammar has a dual interpretation as a specification and a program, and hence doubles as a parser. This formed the "front-end" to the translation tools which were the central processes in the FREE, the Formal Requirements Engineering Environment which we constructed during the course of the project.

**The Structure of the Conflict Prediction Specification**

The model we constructed captured the functional requirements of the conflict prediction process within the oceanic ATC domain, and so in what follows we shall call it the "Conflict Prediction Specification" (the CPS). It should be clear from this section, however, that parts of the model, in particular the separation rules, can be re-used for other applications such as a specification of conflict resolution. Many of the axioms in the CPS were non-recursive definitions of predicates or functions in terms of lower-level predicates and functions.
For example, the rule stated in English on page 5 was represented by the following axiom:

(the_Profile_containing(Segment) is_wholly_or_partly_in_shanwick_oca)
=>
[(Segment starts_at_or_after_first_recognised_pt_for_oceanic_cpr)
 <=>
 (the_entry_Time_of(Segment) is_at_or_later_than
  the_Time_of(the_first_recognised_4D_pt_for_oceanic_cpr_of(
    the_Profile_containing(Segment))))]

This axiom amounts to a conditional definition (applicable only to segments belonging to profiles that are wholly or partly in the Shanwick OCA) of the predicate:

(Segment starts_at_or_after_first_recognised_pt_for_oceanic_cpr)

in terms of the functions:

the_Profile_containing(Segment)
the_entry_Time_of(Segment)
the_Time_of(4D_pt)
the_first_recognised_4D_pt_for_oceanic_cpr_of(Profile)

and the predicates:

(Profile is_wholly_or_partly_in_shanwick_oca)
(Time1 is_at_or_later_than Time2)

The structure of the specification reflected the hierarchical structure of the conflict prediction domain, shown in figure 1. At the top-level are axioms specifically capturing the conflict prediction method, which involves pairwise comparisons of segments.
For example, a recursive axiom (which we will refer to as the "box conflict axiom") describing the conditions under which conflict is said to exist within a time interval modelled as a set of discrete points, is as follows:

[(Segment1 and Segment2 are_subject_to_oceanic_cpr) &
 (Time1 is_in_overlap_time_window_for Segment1 and Segment2) &
 (Time2 is_in_overlap_time_window_for Segment1 and Segment2) &
 (Time2 is_at_or_later_than Time1)]
=>
[(box_conflict_exists_between_linear_tracks_of Segment1 and Segment2
  at_some_time_at_or_between Time1 and Time2)
 <=>
 [(box_conflict_exists_between_linear_tracks_of Segment1 and Segment2 at Time1)
  or
  (box_conflict_exists_between_linear_tracks_of Segment1 and Segment2
   at_some_time_at_or_between the_next_integer_Time_in_mins_after(Time1) and Time2)]]

The separation values for segments of a profile are captured by the "Separation Value Axioms".
Figure 1: The Structure of the Specification

An example separation rule from the specification is as follows:

[(Segment1 and Segment2 are_subject_to_oceanic_cpr) &
 (Flight_level1 lies_in_flight_level_range_of Segment1) &
 (Flight_level2 lies_in_flight_level_range_of Segment2)]
=>
[(the_min_vertical_sep_Val_in_feet_required_for Flight_level1 of Segment1
  and Flight_level2 of Segment2) = 1000
 =>
 [(both Segment1 and Segment2 are_flown_at_subsonic_speed) &
  (both Flight_level1 and Flight_level2 are_at_or_below FL 290)]]

This captures a rule which says that in certain situations there has to be a vertical separation of 1000 feet between aircraft. Again, note the use of mix-fix, readable predicates, contributing to the overall transparency of the model. Below the Separation Value Axioms in the hierarchy lies a larger group of Auxiliary Axioms, defining various auxiliary predicates and functions used in the Separation Value Axioms. The higher levels of the specification are anchored by the "Domain Object Axioms", which constrain the meaning of the primitives associated with each sort. Sorts were textually encapsulated in definition modules, where the signature and axiomatic definition of operations (predicates and functions) of that sort resided.
This gives the requirements model the object-centred flavour one expects to find in an algebraic specification, although the use of an object inheritance technique was not required. For example, an extract of the sort Segment is given in figure 2. This was the largest sort definition, having 60 functions, 20 predicates and 50 axioms associated with it.

Sortname: Segments

Function names:
  the_Segment(Profile, 4D_pt1, 4D_pt2, Val)
  the_Profile_containing(Segment)
  the_entry_4D_pt_of(Segment)
  the_exit_4D_pt_of(Segment)
  the_machno_Val_on(Segment)
  the_cruise_climb_status_Val_of(Segment)
  ...

Predicate names:
  Segment1 = Segment2
  Segment1 \= Segment2
  (Integer_0 is_a_min_long_sep_value_for Segment1 and Segment2 entered_via_the_mst_command)
  (time_periods_of Segment1 and Segment2 overlap)
  (flight_level_ranges_of Segment1 and Segment2 overlap)
  (Flight_level lies_in_flight_level_range_of Segment)
  ...

Axioms:
  Segment1 = Segment2 <=>
    [the_entry_4D_pt_of(Segment1) = the_entry_4D_pt_of(Segment2) &
     the_exit_4D_pt_of(Segment1) = the_exit_4D_pt_of(Segment2) &
     the_machno_Val_on(Segment1) = the_machno_Val_on(Segment2) &
     the_Profile_containing(Segment1) = the_Profile_containing(Segment2)]
  ...

Figure 2: Part of the Sort "Segment"

It is possible to follow down chains of definitional axioms until one reaches primitive predicates and functions that require factual profile data (i.e. sort instances) to be evaluated. The specification itself does not include instances of sorts, but for the purposes of animation, the CPS was supplemented with particular details of an oceanic airspace, containing persistent information regarding sort instances (aircraft makes, airfield positions and so on). Finally, profiles themselves need to be represented as sort instances to allow evaluation of the conflict prediction function.
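To make concrete what evaluating the conflict prediction function over sort instances involves, the following sketch evaluates a recursion of the same shape as the box conflict axiom over integer time points. It is purely illustrative: the project's executable form was a Prolog prototype generated by the FREE, and the function names and the one-dimensional track model here are hypothetical.

```python
# Illustrative sketch (hypothetical names; not the project's Prolog prototype):
# evaluating a recursive "box conflict" check over discrete integer times.

def box_conflict_between(t1, t2, conflict_at):
    """True iff conflict_at(t) holds for some integer time t with t1 <= t <= t2.

    Mirrors the recursive structure of the box conflict axiom:
    conflict in [t1, t2] <=> conflict at t1, or conflict in [t1 + 1, t2].
    """
    if t1 > t2:
        return False
    return conflict_at(t1) or box_conflict_between(t1 + 1, t2, conflict_at)

def make_conflict_at(track1, track2, min_sep):
    """Per-instant check: conflict when the (1-D) tracks are closer than min_sep."""
    return lambda t: abs(track1(t) - track2(t)) < min_sep

conflict_at = make_conflict_at(lambda t: 10 * t,        # linear track of Segment1
                               lambda t: 100 - 10 * t,  # linear track of Segment2
                               15)                      # required separation value

print(box_conflict_between(0, 10, conflict_at))  # prints True (tracks cross near t = 5)
```

The recursion bottoms out exactly as the axiom does, by stepping to the next integer time until the upper end of the window is passed.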
**An Alternative Formalism** As we considered the specification language Z to be the most serious rival to the choice of MSFOL, in this section we will briefly compare the two for this application. Z has been proposed for use in requirements capture [24] and although classed as a *model-based* formal specification language, it can just as easily be used to capture domain knowledge in a purely axiomatic way, by naming types and placing logical axioms in schemas around these types. This would result in the kind of loose specification that we argued for above. We illustrate the difference between Z and the customised MSFOL by comparing the MSFOL encoding of the box conflict axiom shown on page 12 with the Z encoding shown in figure 3. Although the Z schema contains signatures of the relations as well as the axiom itself, inspection of the two encodings shows up little difference, except that the MSFOL version is arguably more readable and less “mathematical”. Readability was a key concern as it was important that the CPS was in a form (or easily translated to a form) that could be read by air traffic control experts for the purposes of validation. We also felt that the readability of the CPS would be increased if type information was separated from the main body of axioms and held as part of the grammar/lexical rule definition (although type information was suggested by the use of appropriate namings in the MSFOL). Apart from readability, the reasons for selecting MSFOL over Z for capturing the functional requirements of the *oceanic ATC domain* are: - **MSFOL** could be represented entirely with standard ASCII characters, so that files containing axioms did not require the use of special fonts such as those needed for Z. This makes editing and processing of axiom files more straightforward, and it enhances the portability of axiom files between different machines and software applications. 
- in the *oceanic ATC domain*, no changes of state occur and so it was unlikely we would need to refine our specification to include a state model. Thus we would not need to call on Z's huge collection of set-based notation (as detailed in [25]).
- while tool support for MSFOL was not as readily available as in the case of Z, the use of a "mainstream" logic meant that tools were easy to construct or to import. Creating our own tools environment meant that we could easily interface to and extend it, an important factor in such an exploratory project.
- it is a little awkward to express general relational predicates in Z. Firstly, n-ary relations, where \( n > 2 \), have to be split into pairwise relations, as in the example in figure 3. Secondly, predicate names naturally expressed in mix-fix form have to be expressed using one contiguous identifier.

[SEGMENT, TIME]

areSubjectToOceanicCpr : SEGMENT ↔ SEGMENT
isInOverlapTimeWindowFor : TIME ↔ (SEGMENT × SEGMENT)
isAtOrLaterThan : TIME ↔ TIME
haveBoxConflictAt : (SEGMENT × SEGMENT) ↔ TIME
theNextIntegerTimeInMinsAfter : TIME → TIME
haveBoxConflictAtSomeTimeAtOrBetween : (SEGMENT × SEGMENT) ↔ (TIME × TIME)

∀ Segment1, Segment2 : SEGMENT; Time1, Time2 : TIME •
  ((Segment1, Segment2) ∈ areSubjectToOceanicCpr ∧
   Time1 isInOverlapTimeWindowFor (Segment1, Segment2) ∧
   Time2 isInOverlapTimeWindowFor (Segment1, Segment2) ∧
   Time2 isAtOrLaterThan Time1)
  ⇒
  ((Segment1, Segment2) haveBoxConflictAtSomeTimeAtOrBetween (Time1, Time2)
   ⇔
   ((Segment1, Segment2) haveBoxConflictAt Time1 ∨
    (Segment1, Segment2) haveBoxConflictAtSomeTimeAtOrBetween
      (theNextIntegerTimeInMinsAfter(Time1), Time2)))

Figure 3: Example Z Rule

**The Validation Process**

**Validation of Formal Requirements Models**

Many problems are associated with requirements validation, especially when knowledge has to be elicited from domain experts.
In fact one could contend that a correct and complete model of the domain could not exist, because there is rarely full agreement among experts, and their understanding of the domain tends to change over time. The optimal solution seems to lie in promoting the "fit" between model and domain in various ways, whilst allowing efficient means of model maintenance. We identify two important features of the validation process that support this:

• Diversity: errors occurring in a model can be syntactic or semantic, and may be of omission or commission. Different kinds of validation may unmask errors of different kinds: hence a range of validation processes is advisable.

• Automation: a major factor in the design of the validation process is allowing for the process to be repeated many times. This repetition will initially be frequent, although even after the acceptance of the requirements model, re-validation of an updated model is essential. Hence there is an overwhelming need to automate as many parts of the process as possible.

**A Framework for Validation**

We outline five separate ways in which a formal requirements model could be validated; their relation to the requirements model is as shown in figure 4, referenced by process numbering. In both figures 4 and 5, boxes represent documents or datastores and ovals represent processes or processors. In this context, we use the word "validation" in the widest sense, to include the removal of any class of error from the model.

• **by syntactic checking (process 1):** Under this heading we group the removal of syntactic errors such as spelling mistakes, as well as illegal use of logical operators and type errors in predicate and function arguments. This check will be performed automatically by a parsing tool, and will form the front end to the FREE as shown in figure 4.
Figure 4: A Formal Requirements Engineering Environment (FREE) • **by dynamic testing (process 2):** Being able to generate a prototype automatically has several advantages, principally that captured requirements can be immediately tested, without the need for any software development work. The ease and degree of automation involved in its production will depend upon the application. Historic data can be extracted from the application domain and systematic dynamic testing can be performed in a similar manner to program testing, using a test harness. • **by hand (process 3):** We define hand validation as the use of domain experts to read through and comment on the validity of (a presentation of) the model. This is arguably the most time consuming and unpredictable form of validation, but helpful texts exist relating to the conduct of such interviews and meetings [26]. The FREE should output an easily readable form of the domain capture formalism, substituting mathematical symbols with a natural language translation, and producing diagrams describing the model's structure. • **by formal reasoning (process 4):** Requirements specifications can be used and re-used for different applications and objectives, rather than just being used as a specification of a particular program (and in a way dynamic testing only tests one particular *behaviour*). We require, therefore, a way of reasoning with the model to investigate its general behaviour and logical consequences. Ideally the FREE would incorporate a proof assistant or theorem prover so that an engineer could formulate desirable properties of the model and set them up as theorems to be proved. The proof process often uncovers errors whereas a completed proof heightens confidence in the model. A fully automated route from the requirements model to the proof of model properties would mean easy re-execution of these proofs after model updates. 
• **by simulator (process 5):** Testing of a more user-oriented kind may be performed with a simulator. This should be integrated with the automatically generated prototype via a custom-built interface, allowing the users to test the model themselves. If the simulator is constructed in such a way that the user can ask for explanations of the behaviour of the model, it can be used in conjunction with hand validation sessions (as in process 3 above).

No one form of validation should be used to convince us that a model is valid. For example, dynamic testing of requirements models gives a similar scenario to that of program testing: it shows only the presence of errors, not their absence. The whole validation process should be systematic, and execution of its sub-processes sensibly ordered, with syntactic parsing of the specification preceding any other sub-process. In both hand validation and testing, the scope of validation should be recorded, as these processes may be iterated many times. In summary, a systematic approach should be imported into requirements testing from the conventional field of Software Testing.

**Validation in the FAROAS Project**

Our initial capture of the oceanic ATC domain model led to a document of about three hundred axioms, structured in a hierarchical form. Knowledge was acquired chiefly from documents, although several interview sessions were arranged with air-traffic control staff to elicit background knowledge. Once the model size stabilised, we set about tackling the validation stage. Below we describe the validation processes that we used. The engineering environment that we in effect created is shown in figure 5. The FREE was implemented in Quintus Prolog¹ on a Sun workstation, using the Unix operating system.
All key files were held under the Source Code Control System, a standard Unix configuration management tool, which provides facilities such as file protection, automated version control and logging of alterations.

**Process 1: Syntax Checking**

As a pre-condition for any other validation process, the whole of the CPS must parse successfully, thereby showing that its syntax and its defining grammar are mutually consistent. In effect this use of the parser is similar to tools such as *fuzz* for Z [19]. The grammar that defines the syntax of the domain capture formalism has a level that applies to first order logic generally, and a customised level, which, for example, allows us to control the actual names of variables for each sort within axioms. Hence the content as well as the form of sentences in the formalism is strictly controlled, and any non-conformance will result in failure of the parse. Errors identified in this process may not only arise from oversights in the specification; it may be decided that the grammar itself is inadequate. The grammar was validated by the client through visual inspection of its definition and its use in the documents describing the CPS. As can be imagined, over the course of the project, syntax checking uncovered errors too numerous to count!

**Process 2: Dynamic Testing**

The most complex part of the parsing and translation process is the tool which produces an executable form of the CPS (the production of a Prolog prototype as shown in figure 5). It must be emphasised that the decision on what execution form to use was not made until after the initial requirements capture. If the execution form is known before the construction of the specification, then it can have an undue influence on the representation of the domain, possibly compromising its clarity and natural structure.
¹Copyright ©1991 Quintus Corporation

Figure 5: The FREE for the oceanic ATC domain

Inspection of the CPS showed that the logic could be transformed to Horn clauses, and hence it was quite feasible to automate the process of translation to an executable Prolog prototype. The translation procedure (which takes about 1 minute to execute on our chosen architecture) was built so that each time the specification is updated and successfully parsed, the output parse-tree from the parser feeds into the translator which automatically creates the Prolog prototype (see figure 5). For example, the box conflict axiom referred to on page 12 is automatically translated to the following Prolog clause:

```prolog
box_conflict_exists_between_linear_tracks_of_at_some_time_at_or_between(
        Segment1, Segment2, Time1, Time2) :-
    are_subject_to_oceanic_cpr(Segment1, Segment2),
    is_in_overlap_time_window_for(Time1, Segment1, Segment2),
    is_in_overlap_time_window_for(Time2, Segment1, Segment2),
    is_at_or_later_than(Time2, Time1),
    (   box_conflict_exists_between_linear_tracks_of_at(Segment1, Segment2, Time1)
    ;   the_next_integer_Time_in_mins_after(Time1, Time3),
        box_conflict_exists_between_linear_tracks_of_at_some_time_at_or_between(
            Segment1, Segment2, Time3, Time2)
    ),
    !.
```

The prototype was used for dynamic testing with an historical test set of client-supplied conflict scenarios that tested top-level conflict prediction tasks, and a set of "in-house" generated tests which were designed to systematically test lower level and auxiliary predicates (numbering about 400 tests in total). Insecurities in Prolog to do with types were dealt with by ensuring that any use of the prototype was channelled through the FREE. Thus test data is input in the MSFOL language, and tools parse it, translate it into Prolog queries and then input it into a test harness which runs the prototype.
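A harness of the kind just described can be sketched as follows. The sketch is illustrative only: the project's harness ran Prolog queries against the generated prototype, whereas the stand-in prototype, test data and function names here are hypothetical.

```python
# Illustrative sketch (hypothetical names; not the project's Prolog harness):
# run a set of queries against a prototype and report expected vs actual results.

def run_tests(prototype, tests):
    """tests: list of (description, query_args, expected) triples.
    Returns a list of failures; an empty list means a 100% success rate."""
    failures = []
    for description, args, expected in tests:
        actual = prototype(*args)
        if actual != expected:
            failures.append((description, expected, actual))
    return failures

# Stand-in prototype: vertical separation is adequate when the flight levels
# are at least 1000 ft apart (cf. the subsonic, at-or-below-FL290 rule).
def vertically_separated(level1_ft, level2_ft):
    return abs(level1_ft - level2_ft) >= 1000

tests = [
    ("boundary: exactly 1000 ft", (29000, 28000), True),
    ("just inside 1000 ft",       (29000, 28100), False),
]
print(run_tests(vertically_separated, tests))  # prints [] (all tests pass)
```

Boundary cases like these are worth singling out: as noted below, two of the errors found in the CPS concerned exactly the boundaries of vertical separation.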
After execution of all the tests the output contains the queries in a validation form together with the expected and actual results. The validation feedback loop shown in figure 5 was invoked for 5 tests which gave incorrect results, and this process eventually led to the uncovering of 3 errors which were present in the CPS. Significantly, two of these errors, to do with the boundaries of aircraft vertical separation, had been initially missed at hand validation meetings, emphasising the need for multiple forms of requirements validation. The tests were run repeatedly until a 100% success rate was achieved on both the client-supplied and in-house generated test sets.

**Process 3: Hand Validation**

A validation form was required so that the specification could be presented to air traffic control experts. As the domain capture formalism was already quite readable, the validation form was obtained simply by replacing logical symbols with their natural language translation, and improving the layout and presentation of the axioms.
Sentences in validation form were automatically output from the parsing and translation tools, as illustrated by the resultant form of the box conflict axiom below:

FOR ANY Time1, Segment1, Segment2 and Time2
IF
  Segment1 and Segment2 are subject to oceanic conflict prediction and resolution
  AND Time1 is in the overlap time window for Segment1 and Segment2
  AND Time2 is in the overlap time window for Segment1 and Segment2
  AND Time2 is at or later than Time1
THEN
  box conflict exists between the linear tracks of Segment1 and Segment2
  at some time at or between Time1 and Time2
IF AND ONLY IF
  EITHER box conflict exists between the linear tracks of Segment1 and Segment2 at Time1
  OR box conflict exists between the linear tracks of Segment1 and Segment2
  at some time at or between the next integer Time in mins after Time1 and Time2

Hand validation meetings were arranged, initially to check the scope of the specification, and later to validate individual axioms. Tree diagrams were used to show the hierarchical interconnection of axioms, allowing validators to "navigate" through the model. As the domain capture formalism allows the structure of the oceanic ATC domain to be preserved, air traffic control experts found both it and its validation form understandable, stimulating debate and allowing them to easily uncover errors in our initial understanding of the domain. With so many axioms, however, hand validation was still a long and painstaking process, and there was time at each meeting to study only a part of the whole specification. During the course of the FAROAS project 4 validation meetings were held, each lasting 2-3 days and involving 4-5 personnel. For each meeting the number of errors and omissions found in the specification, ranging from the trivial to the serious, was in the range 10-25.

**Process 4: Formal Reasoning**

During the project we performed a proof of the overall consistency of the CPS without the use of "intelligent" computer-based support tools.
The proof strategy used was to view the requirements model as a theory, and construct a particular interpretation for it which satisfied each of its axioms. As a preliminary to the proof, the set of axioms was reduced by sifting out all those that are definitional. These axioms can be regarded as expressing an extension to the language of the theory, in effect introducing a convenient "abbreviation" for more complex formulae involving lower-level predicates and functions. Approximately 110 axioms were unique, unconditional extensions of this nature, and were removed so that we could concentrate on proving the consistency of the reduced set of axioms. Then an interpretation function for the reduced set of axioms was constructed, and we used it to show that at least one set of objects existed for which the axioms are true. The main effort involved here was in producing an argument that separate parts of multiple conditional definitions for predicates or functions were mutually exclusive. Proving this type of consistency draws attention to the overall structure of the specification, and, in the event, one error was removed from the specification during the proof process. On the other hand, generating hand proofs is a slow and potentially error-prone process for specifications of this size, and a feasibility study to incorporate automated support, as indicated in our idealised FREE in figure 4, was carried out in the project (and is discussed later under "Future Work").

**Process 5: The Interface and Simulator**

An interface for the FREE was produced using Quintus Pro-Windows², giving a consistent "look and feel" to the environment. This allows the CPS and the grammar defining its syntax to be securely maintained, and each time the specification is parsed successfully, a fresh executable prototype and validation form is generated. Any changes made to the CPS can thus be dynamically tested, and viewed in a validation form, in a matter of minutes.
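The symbol substitution that produces a validation form of this kind can be sketched as follows. This is illustrative only: the symbol table and the axiom text are hypothetical examples, not the project's own grammar-driven translation.

```python
# Illustrative sketch (hypothetical symbol table and axiom; not the project's
# tools): produce a natural-language "validation form" of an axiom by
# substituting logical symbols and flattening underscored mix-fix names.

SYMBOL_TO_ENGLISH = {
    "<=>": "IF AND ONLY IF",
    "=>":  "THEN",   # rendered after an "IF ..." antecedent
    "&":   "AND",
    "or":  "OR",
}

def validation_form(axiom):
    """Replace logical symbols with English and underscores with spaces."""
    words = []
    for token in axiom.split():
        words.append(SYMBOL_TO_ENGLISH.get(token, token.replace("_", " ")))
    return " ".join(words)

axiom = ("(Time2 is_at_or_later_than Time1) & "
         "(Segment1 and Segment2 are_subject_to_oceanic_cpr)")
print(validation_form(axiom))
# prints: (Time2 is at or later than Time1) AND
#         (Segment1 and Segment2 are subject to oceanic cpr)
```

Note that the lowercase "and" linking sort variables inside a mix-fix predicate is left untouched, while the logical "&" becomes an uppercase "AND", matching the layout of the validation form shown above.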
The interface also provides the front-end to the simulator, which consists of a windowing system that allows air traffic control experts to:

- input flight profiles
- run the conflict prediction function
- request explanations of conflict decisions

In the event of a detected conflict between two flight profiles the simulator can if required identify the segments which were in conflict, and indicate the required separation values (vertical, lateral and longitudinal) that were violated. The simulator can thus help air traffic controllers to validate that conflict decisions made by the prototype are made on the same basis as their own.

²Copyright ©1991 Quintus Corporation

**Results of the Validation Process**

A slightly surprising result was that dynamic testing and the formal consistency proof uncovered relatively few errors. This may have meant they were not very effective, or that the model was already a good fit for the domain. One argument to support the latter reason was that the syntax checking and hand validation processes were started well in advance of the other processes, with only one hand validation meeting occurring after the first round of dynamic testing. Thus many errors both of omission and commission had already been removed. On the other hand, our client-supplied dynamic test sets were not exhaustive, and our consistency proof was only one aspect of what could be termed formal validation (and we return to these issues in the section on Future Work). The criterion for adequate validation of the CPS (within the boundaries of the project) was agreed in an official test plan, and entailed a sequential error-free execution of processes 1, 2, and 4, after the errors uncovered at a final hand validation meeting had been removed. The scope of the hand validation process covered the Conflict Prediction Axioms and the Separation Value Axioms, while the dynamic testing was limited to the client-supplied and "in-house" test sets as mentioned above.
At the end of the project this criterion was met successfully. Although the simulator (process 5) was not available until the end of the project, it has already shown a potential for use in the maintenance of the model: during the last hand validation meeting it proved possible to remove errors from the CPS, generate a new prototype and use the simulator to test the corrections. Finally, one often quoted point against formal specification is that a mathematical notation can be off-putting to the customer. Our use of a readable formal specification language, naturally capturing the current solution in this domain, has contributed in no small way to the success of the validation process.

**Conclusions**

We have given an overview, using a real application, of a method for formal requirements capture and validation. In summary, this encompasses domain analysis, formalism choice, the development and customisation of a FREE, domain capture and validation. This method addresses the problems of requirements capture through diverse, automated and systematic validation. The benefits of such a method include the user obtaining a validated, maintainable model of the domain which can be used as a prototype running system, for exploratory work on new requirements and for comparisons with any derived (or existing) implementations. If the prototype embedded in the simulator completely satisfies the users' needs then software production at the sub-specification level has effectively been minimised to the production of tools such as translators and test-harness environments. On the other hand, the requirements model may be used as a sound specification from which software developers could generate efficient software that satisfies non-functional constraints, including interfacing and efficiency considerations, or the need to construct a running system within a certified programming language such as Ada.
Generalisation of the Method

Our method has been used for a particular industrial domain that is sufficiently important to require study and formal capture. The domain contained chiefly rule-based and object-centred types of technical, expert knowledge, and a good deal of the knowledge could be extracted from documentation. These are the characteristics that seemed to make our approach appropriate to the *oceanic ATC domain*. Using the method on similar but larger domains, resulting in larger axiom sets, would not seem to present a major problem. Our current CPS has a clear hierarchical structure (shown in figure 1) that could be combined with further structures, such as an axiom set capturing methods of conflict resolution of aircraft profiles (as opposed to conflict prediction). Much more of a problem would lie in using the method on domains requiring a deeper representation, to cover probabilistic, modal or deontic information. In this case we feel problems to do with validation would be exacerbated, as the notation would be far less accessible to experts, and the generation of a prototype a lot less straightforward.

Future Work

While we believe our validation framework as outlined above encourages a rigorous approach to validation, the particular validation plan we carried out in the 20-month project was limited by time. Firstly, a considerably larger test set would be required to ensure every axiom was tested fully. Secondly, all the axioms, not just the Conflict and Separation Value Axioms, require hand validation. Finally, the scope and operation of formal validation needs to be extended and improved. As well as performing a hand consistency proof, during the project we investigated the feasibility of using the B-Tool [27], a generic proof assistant, to support such activities, with a particular view to determining whether such a tool would be useful in maintaining the CPS. The B-Tool is a proof assistant which arose out of early work on the B methodology by J.
R. Abrial [28]. Its main function is that of supporting formal methods experts in constructing proofs to demonstrate important properties of formal specifications. It is a generic tool, in the sense that it can be customised for use with specifications written in various notations: definitions of operators and inference rules used in first order predicate logic are built into the tool, but other definitions and rules can be added by the user. During our feasibility study we used the B-Tool to prove completeness and consistency results for the vertical separation rules of the CPS. Given this initial success, future work could concentrate on providing an automatic link in the FREE to a proof assistant of this sort.

Acknowledgements

This work was aided by the expertise and advice of members of the Civil Aviation Authority. In particular, the interpretation of MATS-2 was aided by the separation rules in informal “production rule” form written by K. McClachlan, by the practical air-traffic control experience of E. Payne, and by R. Thomson's written answers to queries. Overall liaison and project meetings were organised initially by D. Snowden, and later on by T. Smith. We would also like to thank P. Allen of Huddersfield University for helpful discussions on how $Z$ might have been applied to this application.

References
From a Monolithic Big Data System to a Microservices Event-Driven Architecture

Rodrigo Laigner (Department of Computer Science (DIKU), University of Copenhagen, Denmark, rnl@di.ku.dk); Marcos Kalinowski, Pedro Diniz ({kalinowski,pfonseca}@inf.puc-rio.br) and Sérgio Lifschitz (sergio@inf.puc-rio.br), Informatics Department, PUC-Rio, Brazil; Leonardo Barros, Carlos Cassino, Melissa Lemos (Tecgraf/PUC-Rio, Brazil, {barros,cassino,melissa}@tecgraf.puc-rio.br); Darlan Arruda (Department of Computer Science, Western University, Canada, darruda3@uwo.ca); Yongluan Zhou (Department of Computer Science (DIKU), University of Copenhagen, Denmark, zhou@di.ku.dk)

Published in: Proceedings of the 46th Euromicro Conference on Software Engineering and Advanced Applications, 2020. Publisher's PDF, also known as version of record.

Abstract—[Context] Data-intensive systems, a.k.a. big data systems (BDS), are software systems that handle a large volume of data in the presence of performance quality attributes, such as scalability and availability. Before the advent of big data management systems (e.g., Cassandra) and frameworks (e.g., Spark), organizations had to cope with large data volumes with custom-tailored solutions. In particular, a decade ago, Tecgraf/PUC-Rio developed a system to monitor truck fleets in real time and proactively detect events from the positioning data received. Over the years, the system evolved into a complex and large obsolescent code base involving a costly maintenance process. [Goal] We report our experience of replacing a legacy BDS with a microservice-based event-driven system.
[Method] We applied action research, investigating the reasons that motivate the adoption of a microservice-based event-driven architecture, intervening to define the new architecture, and documenting the challenges and lessons learned. [Results] We perceived that the resulting architecture enabled easier maintenance and fault isolation. However, the myriad of technologies and the complex data flow were perceived as drawbacks. Based on the challenges faced, we highlight opportunities to improve the design of big data reactive systems. [Conclusions] We believe that our experience provides helpful takeaways for practitioners modernizing systems with data-intensive requirements.

Index Terms—big data system, microservices, event-driven

I. INTRODUCTION

Data has been generated at an increasingly higher pace over the last years. Social media interactions, sensors, mobile phones, and business processes are examples of sources. Surveys indicate that 2.5 quintillion bytes of data are generated each day, which will lead to approximately 79.4 zettabytes of data by 2025 [1], [2]. This context made the case for the design of big data systems (BDS), which arose to handle the collection and manipulation of large volumes of data in modern business applications. Gorton and Klein [3] define BDS as “distributed systems that include redundant processing nodes, replicated storage, and frequently execute on a shared cloud infrastructure [...] employing a heterogeneous mix of SQL, NoSQL, and NewSQL technologies.” As a result, the development of BDS often imposes challenges on software engineers, as noted by Hummel et al. [4], who cataloged a set of challenges, such as a steep learning curve and complex data processing. Besides, Laigner et al. [5] found that the major challenges in developing BDS concern software architecture design. Event-driven systems and microservices have emerged as compelling architectural paradigms for the development of data-driven software applications [6], [7].
Microservices are small, scalable units, each representing a bounded business capability, and are often autonomously deployed. In contrast to traditional monolithic systems, microservices do not share resources, communicating mainly via message-passing semantics [8]. In line with microservices, an event-driven architecture (EDA) is composed of a set of highly cohesive components that asynchronously react to events to perform a specific task [9]. In this paper, we report the complete replacement of a legacy BDS with a microservice-based event-driven architecture. The replacement comprised a 19-month development period that took place at PUC-Rio's Tecgraf Institute, which provides technical and scientific solutions for a wide range of strategic industrial partners. One of the solutions, developed for a customer in the Oil & Gas sector back in 2008, concerns a monolithic BDS that monitors moving objects (MOs) and proactively detects events that pose risks to the operation, such as vehicle route deviations. Over the years, the system evolved into a complex and large obsolescent code base that involves a difficult maintenance process. In this context, in 2018, with the advent of a new industrial partner interested in the outcomes of the previous project that employed the legacy BDS, Tecgraf's managers decided to take advantage of a new contract to accommodate a complete rewrite of the legacy BDS by adopting current big data technologies, such as Cassandra and Kafka. Furthermore, based on the lessons learned from the legacy BDS, Tecgraf's managers decided that the new project must adopt a microservice-based EDA. Thus, we investigate the integration of microservices and EDA to support data-intensive requirements.
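The EDA style described above — single-purpose, decoupled components that asynchronously react to events — can be sketched minimally as follows. The event name and payload are illustrative assumptions, not taken from the system.

```python
# Minimal event-driven sketch: handlers subscribe to event types and a
# dispatcher thread delivers published events asynchronously.
import queue
import threading

events = queue.Queue()
handlers = {}  # event type -> list of single-purpose processors

def subscribe(event_type, handler):
    handlers.setdefault(event_type, []).append(handler)

def publish(event_type, payload):
    events.put((event_type, payload))

def dispatcher():
    while True:
        event_type, payload = events.get()
        if event_type is None:  # sentinel: shut down
            break
        for handler in handlers.get(event_type, []):
            handler(payload)

# A single-purpose processor reacting to a (hypothetical) event type.
seen = []
subscribe("RouteDeviation", lambda p: seen.append(("alert", p["vehicle"])))

worker = threading.Thread(target=dispatcher)
worker.start()
publish("RouteDeviation", {"vehicle": "truck-17"})
publish(None, None)  # stop the dispatcher
worker.join()
```

The publisher never calls the processor directly, which is exactly the decoupling that lets each component be deployed and scaled on its own.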
The main contributions of this paper are: (i) an investigation of the motivation to adopt a microservice-based EDA; (ii) a 19-month experience report on replacing a legacy BDS with a microservice-based EDA; (iii) a discussion of the obtained results in the form of challenges and lessons learned. The remainder of this paper is organized as follows. Section II provides the background of this work. Next, the action research design is presented, describing the goal, research questions, and methodology. The results are presented in Section IV. Lastly, Section V presents the concluding remarks.

II. BACKGROUND

A. Big data systems

Chen et al. [10] explain that traditional software development is characterized by “structured, batch-oriented, relational small data [volume],” and a straightforward development lifecycle and architecture design. Besides, Gorton and Klein [3] argue that traditional business systems are “relatively well constrained in terms of data growth, analytics, and scale.” On the other side, Gorton and Klein [3] synthesize BDS based on four requirements: (i) write-heavy workload; (ii) variable request loads (adding new resources and releasing them as necessary); (iii) computation-intensive analytics (diverse query workloads and varying latency demands); and (iv) high availability. These requirements represent a significant shift from traditional business systems.

B. Microservices

Software systems have traditionally adopted a monolithic architectural style, in which modules and/or subsystems are integrated and cooperate in a centralized manner. According to Bucchiarone et al.
[8], in such an architecture, “the modularity abstractions rely on the sharing of resources of the same machine [...], and the components are therefore not independently executable.” However, concerns related to the complexity involved in scaling monolithic architectures [8] and aspects related to change, such as evolutionary maintenance [11], have shifted interest in industry towards the adoption of decoupled architectures. Built on SOA principles of loosely coupled services, microservices have emerged as an “organic implementation approach to SOA, encompassing polyglot programming in multiple paradigms and languages, and design for failure; decentralization and automation” [12].

C. Event-driven architecture

Systems that adopt an EDA, also known as reactive systems, are a current subject of interest in the development of data-driven software systems [6]. According to Richards [9], EDA is a pattern “made up of highly decoupled, single-purpose event processing components that asynchronously receive and process events”. Richards argues that “event processors are self-contained, independent, highly decoupled architecture components that perform a specific task in the application or system” [9]. Therefore, in EDA, each single-purpose service employs programming primitives for enabling reaction and response to a set of predefined events. To the best of our knowledge, the literature does not clearly differentiate these from microservices.

III. RESEARCH DESIGN

Our study design follows the Action Research (AR) [13] methodology. The study context, goal and research questions, and methodology are presented hereafter.

A. Context

This study reports on the process of replacing a legacy big data system with a microservice-based EDA. The experience described herein occurred in the context of PUC-Rio's Tecgraf Institute. Tecgraf is a mid-size non-profit research and development organization that conducts projects with industrial partners and government institutions.
Our subject legacy system is a large-size BDS that had been under active development from 2008 to 2014. Figure 1 shows a high-level view of the legacy system. MOs, such as vehicles, have tracking devices installed, so that positioning data (PD) are sent periodically. Every PD is sent to an information flow processing (IFP) engine [14], which analyzes them to uncover non-conformities, such as a vehicle route deviation. Then, the streams are enriched with domain data and presented to users. Over the years, the system has undergone a natural process of corrosion, in which the large source code became difficult to maintain due to the complexity of the system. Besides, the technology stack became obsolete, outpaced by current technologies, and the monolithic structure undermined the introduction of new technologies. Thus, with the advent of a new industrial partner with a closely related problem context, Tecgraf's managers realized that the process of recruiting and training new developers that would be able to implement a new instance of the legacy BDS was not feasible. Also, with the myriad of big data technologies that emerged in the last decade, such as Cassandra and Kafka, and the mentioned drawbacks found in the legacy BDS, Tecgraf's managers decided that the best approach would be designing a new architecture from scratch, based on microservices and event-driven principles. Besides, the new architecture should embrace widely adopted open-source technologies instead of relying on in-house solutions.

B. Goal and Research Questions

Developing a BDS poses several challenges to developers, such as steep learning curves, lack of modeling and debugging support, and data consistency [4]. Moreover, the recent trend towards the adoption of microservices architectures has shown that without a careful design, drawbacks related to redundancy and data consistency may emerge [11].
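The kind of non-conformity check the legacy IFP engine performs on positioning data can be sketched minimally. The point-sampled route representation, names, and threshold below are illustrative assumptions, not the engine's actual logic.

```python
# Hypothetical route-deviation check over positioning data (PD): a
# position is flagged when it strays farther than a threshold from
# every waypoint of the planned route. Planar coordinates are assumed
# for simplicity; a real engine would use geodesic distances.
import math

def distance(p, q):
    """Euclidean distance between two (x, y) points."""
    return math.hypot(p[0] - q[0], p[1] - q[1])

def is_route_deviation(position, planned_route, threshold):
    """True when `position` is beyond `threshold` of the whole route."""
    return all(distance(position, wp) > threshold for wp in planned_route)

# A toy planned route along the x-axis.
route = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
```

A stream processor would run such a predicate on every incoming PD and emit a deviation event when it fires.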
Although there is a substantial body of work reporting on microservices decomposition [8], [11], designing a microservice-based system without decomposing an existing monolithic system is not substantially covered in the literature [11]. Besides, to the best of our knowledge, there is no work that reports the challenges and lessons learned in replacing a legacy BDS with a new microservices EDA. Thus, our goal is to report the experience of replacing a legacy BDS with a microservice-based EDA without decomposing the existing system. To achieve our goal, we derived three Research Questions (RQs), which are detailed hereafter.

**RQ1. What are the reasons that motivate the adoption of a microservice-based EDA to replace a legacy big data system?** This first research question intends to comprehend the reasons that motivate the adoption of architectural alternatives different from the one found in the legacy big data system, particularly regarding EDA and the MS architectural style. While there are different works on the motivations for migrating to MS-based architectures, we wanted to understand the specific motivations of our context.

**RQ2. What are the benefits and limitations perceived on replacing a legacy big data system with a microservice-based EDA?** The second research question explores the perceived benefits and limitations (post-development) related to the adoption of a new microservice-based EDA. We aim to uncover the technical decisions taken by the development team that derived drawbacks and positive results.

**RQ3. What are the challenges faced and lessons learned while replacing a legacy big data system with a microservice-based EDA?** The third research question concerns unveiling the challenges and lessons learned that were perceived throughout the development process.

C. Method

This section presents the research method employed to answer our research questions.
The organization of the method follows the template of Santos and Travassos [13], which suggests one AR stage per section. Figure 2 depicts the AR process based on Davidson et al. [15]. The process starts with a diagnosis of the problem, followed by a plan to address the issue being investigated. Then, the plan is put into practice in the intervention phase. Lastly, an evaluation and analysis is carried out. Although the AR process allows for iterating through phases to achieve results incrementally, this study reports a full cycle of the methodology. 1) **Diagnostic:** Santos and Travassos [13] argue that this stage “consists of exploring the research field, stakeholders and their expectations”. Thus, as this phase naturally maps to answering RQ1, we designed an exploratory survey to collect the expectations of Tecgraf's main stakeholders involved in the project. In this survey, we aim to report on the drivers of the technical decision of moving towards a microservices EDA and obtain a view of the drawbacks found in the legacy BDS. Besides, the survey results allow us to better understand the problem context and to cross-validate the findings of subsequent steps of the AR process. The target population of this study is composed of product managers, software architects, and developers of the Tecgraf institute that have contributed to the decision about replacing the legacy BDS. The following questions compose our survey: (Q1) What are the drawbacks found in the legacy BDS that motivate the substitution? (Q2) What are the drivers for defining event-driven microservices as a target architecture? (Q3) What characteristics of the legacy BDS are important to remain in the target system? (Q4) What challenges would you expect to encounter in replacing a legacy BDS with a microservice-based EDA? (Q1)-(Q3) are defined with the goal of gathering information on the legacy system and extracting requirements that must remain in the new architecture.
(Q4) aims at gathering the perception of the stakeholders of the challenges incurred by the replacement process. Thus, we can cross-check whether the expectations on challenges are met at the end. In addition, regarding the BDS, we conduct an analysis of the documentation, inspection of the source code and historical commits to uncover technical challenges faced by developers at the time. 2) **Planning:** This stage concerns the definition of actions to be taken onward. A component of this phase is conducting a literature survey for the purpose of examining the research theme [13]. Thus, we report our searching process on the aforementioned themes and the sequenced set of activities to carry out during the intervention step. 3) **Intervention:** According to Santos and Travassos [13], this phase concerns the implementation of planned actions, which are depicted “in a chronological way, describing how the activities were performed during the research.” In this phase, data collection is a product of our experience playing the co-located role of software architects within Tecgraf/PUC-Rio. During this period, we analyzed the results of several meetings, emails, interviews, and technical documents, such as use cases and user interface screen prototypes, in order to confirm our findings regarding the process of replacing a legacy BDS. The material collection was conducted between July 2018 and January 2020. Through intervention, we seek to unveil the challenges of adopting a microservice-based architecture from scratch (i.e., when no monolithic system is refactored) with event-driven requirements. The results of this AR step enable us to partially answer RQ2 and RQ3. 4) **Evaluation and Reflection:** This phase regards analyzing the effects of the actions taken.
Although our study provides the point of view of two interveners playing a software architect role, it is worthwhile to enrich our understanding of the effects of the intervention by collecting the perceptions of other developers that also played a role in the project, i.e., contributed to the project code base. Therefore, in order to mitigate risks related to the report of outcomes of the intervention from a single point of view, we designed a survey to gather the perception of other developers about the new architecture. This also allowed us to cross-validate our findings to reduce limitations of the study. Through evaluation, we are able to complement our answer to RQ2 and RQ3.

IV. RESULTS

This section provides detailed discussions on the AR methodology employed at Tecgraf to answer the RQs.

A. Diagnostic

The diagnosis phase “enables the identification of primary causes and circumstances faced by an organization” [13]. In this section, we analyze the project context by inquiring into stakeholders' expectations and understanding the legacy BDS. 1) Survey with stakeholders: We applied the survey to four Tecgraf stakeholders (SH{1-4}) that were closely involved in the arrangements of the project. Regarding the drawbacks of the legacy BDS, the respondents unanimously agreed on the complexity and obsolescence of the source code.
For instance, SH1 argues that the “code was poorly structured and documented,” and “the technology was obsolete.” Next, SH1 asserts that a driver for microservices adoption is that “data could grow rapidly and the performance could be a bottleneck; […] an architecture able to escalate easily was very attractive.” Besides, SH2 highlights that an EDA “facilitate[s] the processing of events from different sources,” “[supporting] the communication and isolation among microservices.” Next, the respondents agreed that the core requirements to remain are georeferenced tracking of MOs and definition of routes. In addition, we found that a custom-tailored IFP engine was engineered to monitor MOs (retrieved from in-memory queues) and detect events such as route deviations. Lastly, the legacy BDS counts 400K LOC, divided into application code and libraries.

B. Planning

The foundations that guide our planning stage were elucidated in the diagnosis stage. In this stage, we sought to gather knowledge on the research themes by searching the literature. We submitted searches on the Scopus digital library regarding microservices and event-driven architecture. In summary, although studies on microservices are prevalent [8], [16], we found that the problem of defining a microservice (MS) architecture for a new system is still an open problem in the literature, with no clear guidelines [11]. Hence, we defined a sequence of phases to be followed in a defined timetable. We start with the conception, describing the requirements elicitation process. Next, the architecture and design phase is conducted.
Lastly, with the architectural blueprint of the system, we were ready to start the implementation phase.

C. Intervention

In this section we provide an in-depth discussion of the process conducted to replace the legacy BDS with a microservices EDA. As suggested by Santos and Travassos [13], the intervention is described in a chronological way. 1) **Conception:** The project started with a researcher (the first author), a project manager (fourth author), a senior developer, and two requirements analysts. Soon, the requirements started to change. By working closely with the industry partner, the requirements team realized that their needs were different from the ones described in the contract. As we aimed at a microservice-based architecture, this context played a role in the process of defining our services. Although there are general guidelines on migration patterns and deducing microservices from a monolithic system [17], [18], by the time we were investigating approaches to support the process of defining our services, we had not found studies focused on how to define a microservices architecture from scratch, i.e., when no monolithic system is decomposed. The literature [17], [18] often cites Domain-Driven Design (DDD) [19] as a compelling technique to identify subdomains of the business, where industry practitioners advocate that each subdomain maps to a bounded context (a deployable unit). However, Evans [19] advocates first for a discovery process of the application domain, where its “understanding comes from diving in, implementing an initial design based on a probably naive model, and then transforming it again and again.” Hence, we followed the advice of starting with a naive model based on the business capabilities (BCs) [20] identified so far (1-month period). A BC, also referred to as a bounded context [16], “is something that a business does in order to generate value,” often representing a delimited domain business model.
We documented the conducted requirements gathering meetings and identified four major BCs of the domain, as shown in Figure 3. Our reasoning for defining the BCs is explained as follows. Analysts plan a patrol (trip) composed of a route to be followed and a set of inspections (stop points) to be performed along the work journey. The structure of patrol and verification trips is no different; however, a verification corresponds to an unforeseen inspection triggered by the reception of a denunciation, while a patrol is previously defined and scheduled in advance. Also, distinct analyst teams handle patrols' planning and verifications. Therefore, we defined both as distinct BCs. The Alerts BC is responsible for the ingestion, processing, and exposition of alerts coming from any source. For instance, instruments installed in oil pipelines periodically send alerts concerning suspicious activities, such as manual excavations. Besides, through the mobile app, in-field operators can communicate messages and alerts to analysts in real time. The Tracking BC is responsible for ingesting and processing real-time PD of patrols and verifications. An in-field team is assigned to a patrol or a verification, and tracking is automatically enabled by the mobile app they carry in operation. In addition, vehicles assigned to a team also send tracking data. Lastly, the Tracking BC is responsible for storing and serving all trajectory data history of mobile devices and vehicles. 2) **Architecture and design:** This section discusses how the requirements elicited were translated to an architectural design, which is exhibited in Figure 4. **Defining a target stack.** The intervention process started at a slow pace due to frequent requirements changes. This context made the case for focusing on the analysis of suitable technologies for the target architecture. Tecgraf has long-lasting expertise in developing distributed systems with Java.
Besides, as the developers were proficient in the language, Java was a natural choice as the main back-end programming language. The project comprised the development of a web application with a distributed architectural style, and the stakeholders expected us to embrace open-source technology. Hence, instead of writing custom-tailored infrastructure solutions (e.g., logging, data access, and dependency management) from scratch, a reasonable choice was to rely on a well-adopted framework to support our development. Thus, we listed a set of capabilities that the framework must deliver: support for the development of REST APIs [21]; support for dependency injection [22]; support for hot reload; embedded support for database access; integrated support for message queuing systems; and support for a reactive programming model. Although there are a number of feasible web framework options for the Java platform, we selected Spring due to its rich ecosystem, composed of multiple integrations built by a supportive community. Besides, support for Spring in question-and-answer communities and extensive online documentation played a role in the decision. As scalability is a major driver of the target architecture, we opted to adopt the database-per-service pattern [23]. Besides, as the requirements were being progressively elicited, we aimed to avoid schema changes with each new version. We then listed three important features for a default persistence technology for our services: (i) a flexible-schema model, (ii) geospatial query support, and (iii) high industrial adoption. MongoDB was selected due to its support for geospatial indexes, replication, load balancing, file storage, and the representation of complex associations within a single record. **Defining services corresponding to business capabilities.** Patrol Planning and Verifications, as depicted in Figure 3, comprehend distinct BCs. 
This understanding led us to design both as distinct microservices, as shown in Figure 4. In-field teams are assigned either to a planned patrol or to a verification, and, through the mobile app, they are able to retrieve data from the respective service in order to support daily operation. As a verification is spawned by a denounce (i.e., it is not planned daily), the mobile app is programmed to proactively check for an assigned verification at a fixed time interval. In case the team is assigned to a verification, the patrol is paused until the end of the verification operation. Next, we chose to define the Tracking BC as a microservice for two reasons: (i) to guarantee PD durability and (ii) to enable retrieval of historical PD. As scalability of PD ingestion and retrieval is a central concern in the architecture, we listed the essential quality attributes a data store must deliver in this case: (i) write-heavy workload support, (ii) availability, (iii) scalability, and (iv) consistency. Thus, we surveyed DBMSs to compare additional quality attributes. The work of Lourenço et al. [24] gave us a starting point, as shown in Table II. Although not considered a database, but rather a pattern, we found it worthwhile to also analyze CQRS [25] as a candidate solution. Given its superior write performance, we selected Cassandra as our solution for intensive PD ingestion. From the point of view of the event sourcing pattern [26], we modeled PD as an event; thus the state of a MO is represented as a sequence of state-changing events (i.e., a historical track). Following the advice of Balalaie et al. [17], who recommend “to start with a low number of services […] and incrementally add more services” as the team understands requirements better, we realized that letting the Tracking MS also deal with the reception of data from external sources (e.g., vehicles and the mobile app) could compromise its performance on serving historical tracking data. 
Thus, in order to provide a single interface for the reception of PD, we defined a MS (Signals) that abstracts the reception of PD behind a RESTful API, guaranteeing conformance to the defined API and communicating the data to interested services. This choice proved right given the scalability requirements entailed by the application: we could increase the number of Signals instances to cope with a growing number of MOs sending PD without affecting the serving of historical data. Lastly, we defined a specific MS (Alerts) responsible for the reception, processing, and serving of alerts. In other words, alerts are events communicated to the system that should be stored consistently and communicated to interested services. Thus, we applied the transactional outbox pattern [27], which advocates that a service that publishes a domain event [28] (in our case, an alert) must atomically update the database and publish the event. With a growing number of different classes of events and interested services, this MS provides a dedicated unit of scalability. **Defining domain events.** Domain events [28] are often employed in EDA to notify services of a change in the state of a domain object. A domain event, when received by a service, may trigger actions to be performed. A domain event is often published through a communication channel (a.k.a. topic) that is subscribed to by interested parties, allowing loosely coupled, non-blocking message passing [29]. This approach is particularly important in data-intensive systems to avoid polling mechanisms, which may become a bottleneck over time. In our case, domain events were elicited during brainstorming meetings to evolve the domain knowledge, and, due to space constraints, we summarize the main ones in Table I. 
Based on the work of Brandolini [30], the columns are explained as follows: an Actor is the source object of an action; a Command represents an action triggered by an actor; and an Event is the consequence of an action. As mentioned earlier, a PD received by the system is a domain event representing that the state of a MO (mobile app or vehicle) has been updated. When a PD is received by Signals, it checks the conformance of the PD object and publishes it to a Kafka topic called signals. Although we do not adopt the transactional outbox pattern [27] in Signals, we allow for faster communication of the new state to interested services (since we do not wait for a synchronous database operation) by relying on Kafka's consistency guarantees, a suitable trade-off for a (quasi) real-time system. On the other side, every alert processed by the Alerts MS, after being stored in the database, is published in a topic called alerts. After analysis (by an analyst), an alert may result in the assignment of a verification to an in-field team. This assignment event is then delivered to the given in-field team through the mobile app. **Serving domain events to end-users.** A user interface application (UI-APP) was developed in Angular [31] to enable analysts to visualize domain events (e.g., alerts) in real-time. We chose not to program polling mechanisms in our UI-APP because we envisioned that real-time domain event retrieval from our microservices could experience high latency as the number of users and the amount of data grow. A technology that could relay domain events coming from Kafka topics to web browser clients was necessary. Thus, we found that WebSocket [32] makes a suitable protocol to handle real-time event-driven communication in the front-end layer by avoiding polling the server for data. 
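The transactional outbox pattern [27] applied in the Alerts MS can be sketched in a few lines. The sketch below is our illustration, not the project's code: it assumes a relational store (SQLite) rather than MongoDB, and the table names, field names, and relay function are hypothetical.

```python
import json
import sqlite3
import uuid

# Illustrative schema: the business table (alerts) and the outbox table
# live in the SAME database, so one transaction covers both writes.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE alerts (id TEXT PRIMARY KEY, payload TEXT)")
conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, topic TEXT,"
             " event TEXT, published INTEGER DEFAULT 0)")

def store_alert(alert: dict) -> None:
    """Atomically persist the alert AND the domain event to be relayed."""
    alert_id = str(uuid.uuid4())
    with conn:  # single transaction: both rows commit or neither does
        conn.execute("INSERT INTO alerts VALUES (?, ?)",
                     (alert_id, json.dumps(alert)))
        conn.execute("INSERT INTO outbox (id, topic, event) VALUES (?, ?, ?)",
                     (str(uuid.uuid4()), "alerts",
                      json.dumps({"alert_id": alert_id, **alert})))

def relay_pending() -> list:
    """A message relay polls the outbox and forwards unpublished events."""
    rows = conn.execute(
        "SELECT id, topic, event FROM outbox WHERE published = 0").fetchall()
    for row_id, _topic, _event in rows:
        # a real relay would publish `_event` to `_topic` on the broker here
        conn.execute("UPDATE outbox SET published = 1 WHERE id = ?", (row_id,))
    conn.commit()
    return rows

store_alert({"kind": "manual_excavation", "pipeline": "P-01"})
assert len(relay_pending()) == 1  # exactly one event awaiting relay
assert len(relay_pending()) == 0  # nothing left once marked published
```

In production the relay is typically a separate process that reads unpublished outbox rows and publishes them to the broker, preserving atomicity between the state change and the event publication.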
<table> <thead> <tr> <th>Actor</th> <th>Command</th> <th>Event</th> </tr> </thead> <tbody> <tr> <td>Mobile app</td> <td>Start patrol</td> <td>Patrol started</td> </tr> <tr> <td>Analyst</td> <td>Assign verification</td> <td>Verification assigned</td> </tr> <tr> <td>Mobile app</td> <td>Start verification</td> <td>Patrol paused</td> </tr> <tr> <td>Mobile app</td> <td>Finish verification</td> <td>Patrol resumed</td> </tr> <tr> <td>MO</td> <td>Update positioning</td> <td>MO state changed</td> </tr> <tr> <td>Processing</td> <td>Register route deviation</td> <td>Route deviation detected</td> </tr> </tbody> </table> **Table I: Domain events identified** 3) Implementation: In total, an overall effort of more than 7,500 hours was expended by the development team, resulting in a system of around 30,000 lines of Java code, 14,000 lines of TypeScript code, and 29 data tables and documents. On average, each MS has 2,500 lines of code. 
Due to the large number of microservices and supporting technologies, and the lack of knowledge of state-of-the-art DevOps tools, a great effort was put into deployment. For instance, Docker containers were used to package our services. Besides, several fixes that were not anticipated were implemented only to adapt to Docker deployment. This context makes the case for introducing DevOps earlier in the development process. D. Evaluation and Reflection This section reports the results of a survey conducted with developers and discusses challenges and lessons learned. Although the lessons learned relate to the specific action research project context described in this paper, we believe most of them are generalizable to other industrial settings. 1) Survey with developers: As mentioned in Section III-C4, we designed a survey to collect the points of view of three developers who collaborated in the development process. First, we defined a set of challenges collected along the intervention. Then, we asked the developers about their agreement with these challenges and also inquired about additional ones. Their perceptions are summarized in the next section. Due to space constraints, survey details can be found online [35]. 2) Challenges and lessons learned: **Defining microservices.** Although Patrols and Verifications represent different domain concepts, which lead to different domain events, designing them as distinct microservices caused the problem of duplicate concepts [19], whereby duplicated effort on each requirement change (as a result of newly acquired knowledge) was observed throughout the development life cycle. As a lesson learned, we suggest following the advice of Fowler [36], who advocates the Monolith-First approach, in which a project “shouldn’t start […] with microservices, even if you’re sure your application will be big enough to make it worthwhile.” Waiting for requirements to mature is essential to define microservices properly. 
However, emerging research discusses model-driven development of microservice-based systems, which may help mitigate some of the impedance in microservices design [37]. **Data modeling.** The fast-paced development process, together with the adoption of novel technologies, made it challenging to get data modeling right. The distributed architecture forced us to adopt schema-less and denormalized data models, encapsulated through APIs, rather than the normalized data models and data consistency guarantees usually found in monolithic systems, in line with Gorton and Klein's discussion of BDS [3]. Furthermore, even though designing service communication around domain events augments the expressiveness of the domain, from the point of view of developers, the myriad of services and technologies led to difficulties in troubleshooting problems. The complex data flow entailed by the application often led to misunderstandings and slowed the process of identifying the root cause of errors. **Selecting an IFP engine.** Attempts to translate our IFP requirements to Flink were unsuccessful (see Section IV-C2). A second problem was relying on a short time window for implementing our IFP use cases. As the number of technologies employed was already large and the team had no previous experience with IFP engines, we realized that the learning curve could compromise subsequent sprints. As a lesson learned, we highlight that the selection of an IFP solution is an architectural decision. It means that the chosen IFP engine should fit the architecture, and not the other way around. Furthermore, we consider Orleans streams [34] a promising candidate for expressing computations that span different items due to its flexible processing engine. 
--- **TABLE II** TECHNOLOGIES SURVEYED FOR THE TRACKING MICROSERVICE <table> <thead> <tr> <th>Technology</th> <th>Write-Performance</th> <th>Availability</th> <th>Scalability</th> <th>Maintainability</th> <th>Read-Performance</th> <th>Consistency</th> </tr> </thead> <tbody> <tr> <td>CQRS</td> <td>Average</td> <td>Bad</td> <td>Bad</td> <td>Bad</td> <td>Great</td> <td>Good</td> </tr> <tr> <td>CouchDB</td> <td>Below average</td> <td>Great</td> <td>Below average</td> <td>Good</td> <td>Average</td> <td>Great</td> </tr> <tr> <td>MongoDB</td> <td>Below average</td> <td>Below average</td> <td>Below average</td> <td>Average</td> <td>Great</td> <td>Great</td> </tr> <tr> <td>Cassandra</td> <td>Great</td> <td>Great</td> <td>Great</td> <td>Below average</td> <td>Average</td> <td>Great</td> </tr> </tbody> </table> --- **Information flow processing.** Cugola and Margara [14] refer to systems that “require processing continuously flowing data from geographically distributed sources [...] to obtain timely responses to complex queries” as information flow processing (IFP) applications. At first, we investigated Flink [33] as our IFP engine due to its ability to compute operations over streaming data, like our real-time PD. However, as asserted in the Orleans documentation [34], these systems present a “unified data-flow graph of operations that are applied in the same way to all stream items,” which hinders applying filtering or aggregation operations to different data items in the same computation. For example, as part of the process of checking the real-time trajectory data of MOs against their respective planned routes, we found limited support for integrating an external call to the Patrols MS API in order to retrieve the planned route. Thus, we built a tailored solution (Processing in Figure 4) that takes advantage of the reactive primitives of Spring and in-memory data processing. 
Thereby, based on a 5-minute time window, Processing retrieves the PD associated with each in-field team (from the signals topic) and triggers the route deviation detection computation. If a deviation is detected, a route deviation is registered and the respective event is triggered. The Alerts MS acknowledges the event as a new alert and publishes it in the alerts topic. This separation of concerns allows us to scale separate parts of the system independently. **Embracing failure.** Some MS-oriented frameworks (e.g., Spring) fail to present extensive support for failure handling in workflows spanning multiple microservices. For instance, in the absence of distributed transactions, the developer has to hard-code the logic for recovering from failures in such workflows. Furthermore, given the fine-grained nature of MS instances and the difficulty of reasoning globally over each MS's local state, we advocate a programming model that specifies fault-tolerance properties that can be reasoned about for requests spanning multiple microservices. V. Concluding Remarks This study reports an industrial experience regarding the replacement of a legacy monolithic BDS with an event-driven microservice-based architecture. Microservices promise to automatically react to failures and changing workloads, provide independent deployment, and support polyglot technologies [7] [12]. EDA promises a reactive programming model among highly cohesive components that react to incoming events by performing a computation or triggering one in another component [9]. However, the joint use of microservices and EDA had not been previously discussed in the context of BDS. Moreover, we present how microservices can be defined without refactoring a legacy monolithic system. From requirements elicitation, through architecture design, to implementation, we provided an example of how a system with data-intensive requirements can benefit from microservices and event-driven principles. 
The main takeaways from our experience are as follows. Defining microservices too early in the development process may yield a wrong definition; in a fast-paced development scenario, waiting for requirements to mature is essential to getting microservices right. On the one hand, microservices' support for easier maintenance and fault isolation was perceived as a benefit of the architecture. On the other hand, the complex data flow entailed by the number of microservices, as well as the myriad of technologies, was perceived as a drawback. **References**
FOLD-R++: A Toolset for Automated Inductive Learning of Default Theories from Mixed Data Huaduo Wang and Gopal Gupta Computer Science Department, The University of Texas at Dallas, Richardson, USA {huaduo.wang,gupta}@utdallas.edu Abstract FOLD-R is an automated inductive learning algorithm for learning default rules with exceptions for mixed (numerical and categorical) data. It generates an (explainable) answer set programming (ASP) rule set for classification tasks. We present an improved FOLD-R algorithm, called FOLD-R++, that significantly increases the efficiency and scalability of FOLD-R. FOLD-R++ improves upon FOLD-R without compromising or losing information in the input training data during the encoding or feature selection phase. The FOLD-R++ algorithm is competitive in performance with the widely-used XGBoost algorithm; however, unlike XGBoost, the FOLD-R++ algorithm produces an explainable model. Next, we create a powerful toolset by combining FOLD-R++ with s(CASP), a goal-directed ASP execution engine, to make predictions on new data samples using the answer set program generated by FOLD-R++. The s(CASP) system also produces a justification for the prediction. Experiments presented in this paper show that our improved FOLD-R++ algorithm is a significant improvement over the original design and that the s(CASP) system can make predictions in an efficient manner as well. 1 Introduction The dramatic success of machine learning has led to a torrent of Artificial Intelligence (AI) applications. However, the effectiveness of these systems is limited by the machines' current inability to explain their decisions and actions to human users. This is mainly because statistical machine learning methods produce models that are complex algebraic solutions to optimization problems such as risk minimization or geometric margin maximization. The lack of intuitive descriptions makes it hard for users to understand and verify the induced models and refine them. 
The ILP learning problem can be regarded as a search problem for a set of clauses that deduce the training examples. The search is performed either top-down or bottom-up. A bottom-up approach builds most-specific clauses from the training examples and searches the hypothesis space by using generalization. This approach is not applicable to large-scale datasets, nor can it incorporate negation-as-failure into the hypotheses. A survey of bottom-up ILP systems and their shortcomings can be found in (Sakama 2005). In contrast, a top-down approach starts with the most general clause and then specializes it. A top-down algorithm guided by heuristics is better suited for large-scale and/or noisy datasets (Zeng, Patel, and Page 2014). The FOIL algorithm (Quinlan 1990) is a popular top-down inductive logic programming algorithm that generates logic programs. FOIL uses weighted information gain as the heuristic to guide the search for the best literals. The FOLD algorithm by Shakerin (Shakerin, Salazar, and Gupta 2017) is a newer top-down algorithm inspired by FOIL. It generalizes FOIL by learning default rules with exceptions: it first learns a default conclusion that covers the positive examples while avoiding the negative examples; it then swaps the positive and negative examples and calls itself recursively to learn the exceptions to the default conclusion. Neither FOIL nor FOLD can deal with numeric features directly; an encoding process is needed in the preparation phase of the training data that discretizes the continuous numbers into intervals. However, this process not only adds a huge computational overhead to the algorithm but also leads to loss of information in the training data. 
To deal with the above problems, Shakerin developed an extension of the FOLD algorithm, called FOLD-R, to handle mixed (i.e., both numerical and categorical) features, which avoids the discretization process for numerical data (Shakerin 2020; Shakerin, Salazar, and Gupta 2017). However, FOLD-R still suffers from efficiency and scalability issues when compared to other popular machine learning systems for classification. In this paper we report on a novel implementation method we have developed to improve the design of the FOLD-R system. In particular, we use the prefix sum technique (Wikipedia contributors 2021) to optimize the calculation of information gain, the most time-consuming component of the FOLD family of algorithms (Shakerin 2020). Our optimization, in fact, reduces the time complexity of the algorithm. If \( M \) is the number of training examples and \( N \) is the number of unique values of a specific feature, then the complexity of computing the information gain for all the possible literals of a feature is reduced from \( O(M \cdot N) \) for FOLD-R to \( O(M) \) in FOLD-R++. Our experimental results indicate that the FOLD-R++ algorithm is comparable to popular machine learning algorithms such as XGBoost with respect to various metrics (accuracy, recall, precision, and F1-score) as well as in efficiency and scalability. However, in addition, FOLD-R++ produces an explainable and interpretable model in the form of an answer set program. This paper makes the following novel contribution: it presents the FOLD-R++ algorithm, which significantly improves the efficiency and scalability of the FOLD-R ILP algorithm without adding pre-processing overhead or losing information in the training data. As mentioned, the new approach is competitive with popular classification models such as the XGBoost classifier (Chen and Guestrin 2016). 
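The prefix-sum idea can be illustrated on a small set of feature values (the values below are illustrative; categorical values are counted separately and omitted here). After one counting pass, the true/false positive/negative counts for every literal of the form `feature <= x` are available in constant time each, so evaluating all candidate splits of a feature is linear in the number of examples:

```python
from bisect import bisect_right
from collections import Counter
from itertools import accumulate

# Feature values of the positive and negative training examples (numeric only).
pos_vals = [1, 2, 3, 5, 6, 6, 6]
neg_vals = [2, 4, 6, 7]

xs = sorted(set(pos_vals) | set(neg_vals))          # unique split points
pos_cnt, neg_cnt = Counter(pos_vals), Counter(neg_vals)  # one counting pass
pos_prefix = list(accumulate(pos_cnt[x] for x in xs))    # positives <= xs[i]
neg_prefix = list(accumulate(neg_cnt[x] for x in xs))    # negatives <= xs[i]

def counts_for_le(x):
    """tp, fp, tn, fn for the literal `feature <= x` in O(log N) lookup.

    Assumes x >= min(xs); each query reuses the precomputed prefix sums.
    """
    i = bisect_right(xs, x) - 1
    tp, fp = pos_prefix[i], neg_prefix[i]
    fn, tn = pos_prefix[-1] - tp, neg_prefix[-1] - fp
    return tp, fp, tn, fn

assert counts_for_le(3) == (3, 1, 3, 4)  # 3 positives and 1 negative are <= 3
```

The information gain of every split can then be computed from these four counts, without re-scanning the data per candidate literal.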
The FOLD-R++ algorithm outputs an answer set program (ASP) (Gelfond and Kahl 2014) that serves as an explainable/interpretable model. This generated answer set program is compatible with s(CASP) (Arias et al. 2018), a goal-directed ASP solver, which can efficiently justify the predictions generated by the ASP model.\(^1\) 2 Inductive Logic Programming Inductive Logic Programming (ILP) (Muggleton 1991) is a subfield of machine learning that learns models in the form of logic programming rules (Horn clauses) that are comprehensible to humans. The problem is formally defined as follows: **Given** 1. A background theory \( B \), in the form of an extended logic program, i.e., clauses of the form \( h \leftarrow l_1, \ldots, l_m, \text{not } l_{m+1}, \ldots, \text{not } l_n \), where \( l_1, \ldots, l_n \) are positive literals and \( \text{not} \) denotes negation-as-failure (NAF) (Baral 2003; Gelfond and Kahl 2014). We require that \( B \) has no loops through negation, i.e., it is stratified. 2. Two disjoint sets of ground target predicates \( E^+, E^- \), known as positive and negative examples, respectively. 3. A hypothesis language of function-free predicates \( L \), and a refinement operator \( \rho \) under \( \theta \)-subsumption (Plotkin 1971) that disallows loops over negation. **Find** a set of clauses \( H \) such that: - \( \forall e \in E^+, B \cup H \models e \) - \( \forall e \in E^-, B \cup H \not\models e \) - \( B \wedge H \) is consistent. --- \(^1\)The s(CASP) system is freely available at https://gitlab.software.imdea.org/ciao-lang/sCASP. \(^2\)The FOLD-R++ toolset is available at https://github.com/hwd404/FOLD-R-PP. 3 The FOLD-R++ Algorithm The FOLD algorithm (Shakerin 2020; Shakerin, Salazar, and Gupta 2017) is a top-down ILP algorithm that searches for the best literals to add to the body of the clauses of the hypothesis \( H \), guided by an information-gain-based heuristic. 
The FOLD-R++ algorithm\(^2\) refactors the FOLD algorithm and is summarized in Algorithm 1. The output of the FOLD-R++ algorithm is a set of default rules that include exceptions. An example implied by any rule in the set is classified as positive. Therefore, the FOLD-R++ algorithm rules out the already covered positive examples in line 5 after learning a new rule. For each rule learning process, the best literal is selected based on the weighted information gain over the current training examples (line 13); then the examples that cannot be implied by the learned default literals are ruled out for further learning of the current rule. When the information gain becomes zero or the number of negative examples drops below the ratio threshold, the default learning part is done. Unlike the FOIL algorithm, FOLD-R++ next learns exceptions after first learning default literals. This is done by swapping the residual positive and negative examples and calling itself recursively in line 29. The remaining positive and negative examples can be swapped again and exceptions to exceptions learned (and then swapped further to learn exceptions to exceptions of exceptions, and so on). The **ratio** parameter in Algorithm 1 represents the ratio of training examples that are part of the exception to the examples implied by only the default conclusion part of the rule. It allows the user to control the nesting level of exceptions. **Example 1** In the FOLD-R++ algorithm, the target is to learn rules for \( \text{fly}(X) \). \( B, E^+, E^- \) are the background knowledge, positive examples, and negative examples, respectively. \[ \begin{align*} B: & \quad \text{bird}(X) :- \text{penguin}(X). \\ & \quad \text{bird}(\text{tweety}). \quad \text{bird}(\text{et}). \\ & \quad \text{cat}(\text{kitty}). \quad \text{penguin}(\text{polly}). \\ E^+: & \quad \text{fly}(\text{tweety}). \quad \text{fly}(\text{et}). \\ E^-: & \quad \text{fly}(\text{kitty}). 
\quad \text{fly}(\text{polly}). \end{align*} \] The target predicate \( \{\text{fly}(X) :- \text{true}\} \) is specified when calling the \( \text{learn\_rule} \) function at line 4. The function selects the literal \( \text{bird}(X) \) and adds it to the clause, giving \( r = \text{fly}(X) :- \text{bird}(X) \), because it has the best information gain among \( \{\text{bird, penguin, cat}\} \). Then, the training set is updated to \( E^+ = \{\text{tweety, et}\} \), \( E^- = \{\text{polly}\} \) in lines 16-17. The negative example \( \text{polly} \) is still implied by the generated clause and so is a false positive classification. The default learning part of the \( \text{learn\_rule} \) function finishes because the best information gain among candidate literals is zero. Therefore, the FOLD-R++ function is called recursively with swapped positive and negative examples, \( E^+ = \{\text{polly}\} \), \( E^- = \{\text{tweety, et}\} \), to learn the exceptions. In this case, an abnormal predicate \( \{\text{ab0}(X) :- \text{penguin}(X)\} \) is generated and returned as the only exception to the previously learned clause, giving \( r = \text{fly}(X) :- \text{bird}(X), \text{not } \text{ab0}(X) \). 
Algorithm 1 FOLD-R++ Algorithm Input: target, B, E⁺, E⁻, ratio \( \triangleright \) ratio is the exception ratio Output: \( R = \{ r_1, ..., r_n \} \) \( \triangleright \) R is the rule set 1: function FOLD-R++(E⁺, E⁻, \( L_{used} \)) 2: \( R \leftarrow \emptyset \) 3: while \(|E⁺| > 0\) do 4: \( r \leftarrow \text{LEARN\_RULE}(E⁺, E⁻, L_{used}) \) 5: \( E⁺ \leftarrow E⁺ \setminus \text{covers}(r, E⁺, \text{true})\) 6: \( R \leftarrow R \cup \{ r \} \) 7: end while 8: return \( R \) 9: end function 10: function LEARN_RULE(E⁺, E⁻, \( L_{used} \)) 11: \( L \leftarrow \emptyset \) 12: while \( \text{true} \) do 13: \( l \leftarrow \text{FIND\_BEST\_LITERAL}(E⁺, E⁻, L_{used}) \) 14: \( L \leftarrow L \cup \{ l \} \) 15: \( r \leftarrow \text{set\_default}(r, L)\) 16: \( E⁺ \leftarrow \text{covers}(r, E⁺, \text{true})\) 17: \( E⁻ \leftarrow E⁻ \setminus \text{covers}(r, E⁻, \text{false})\) 18: if \( l \) is invalid or \(|E⁻| \leq |E⁺| \ast \text{ratio} \) then 19: if \( l \) is invalid then 20: \( L \leftarrow L \setminus \{ l \} \) 21: \( r \leftarrow \text{set\_default}(r, L)\) 22: else 23: \( \text{flag} \leftarrow \text{true} \) 24: end if 25: break 26: end if 27: end while 28: if \( \text{flag} \) then 29: \( AB \leftarrow \text{FOLD-R++}(E⁻, E⁺, L_{used} + L) \) 30: \( r \leftarrow \text{set\_exception}(r, AB)\) 31: end if 32: return \( r \) 33: end function The abnormal rule \( \{\text{ab0}(X) :- \text{penguin}(X)\} \) is added to the final rule set, producing the program below: \( \text{fly}(X) \) :- \( \text{bird}(X) \), not \( \text{ab0}(X) \). \( \text{ab0}(X) \) :- \( \text{penguin}(X) \). We next give more details of the FOLD-R++ algorithm. ### 3.1 Literal Selection The literal selection process of Shakerin's FOLD-R algorithm is summarized in Algorithm 2. 
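The default-with-exception program learned in Example 1 can be emulated directly: `not ab0(X)` succeeds exactly when `ab0(X)` cannot be proved from the facts (negation-as-failure). A minimal Python sketch of this evaluation (our illustration, not part of the FOLD-R++ toolset):

```python
# Facts from Example 1. `not ab0(X)` is evaluated as negation-as-failure:
# it succeeds iff ab0(X) cannot be derived from the (closed) set of facts.
penguin = {"polly"}
cat = {"kitty"}
bird = {"tweety", "et"} | penguin     # bird(X) :- penguin(X). plus bird facts

def ab0(x):
    """ab0(X) :- penguin(X)."""
    return x in penguin

def fly(x):
    """fly(X) :- bird(X), not ab0(X)."""
    return x in bird and not ab0(x)

assert fly("tweety") and fly("et")            # positive examples covered
assert not fly("polly") and not fly("kitty")  # exception and negative excluded
```

A goal-directed solver such as s(CASP) evaluates the same program by proof search rather than set membership, and additionally produces a justification tree for each answer.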
The FOLD-R algorithm (Shakerin 2020; Shakerin, Salazar, and Gupta 2017) selects the best literal based on weighted information gain for learning defaults, similar to the original FOLD algorithm described in (Shakerin, Salazar, and Gupta 2017). For numeric features, the FOLD-R algorithm enumerates all possible splits, then classifies the data and computes the information gain of a literal for each split; the literal with the best information gain is selected as the result. In contrast, FOLD-R++ uses a new, more efficient method employing prefix sums to calculate the information gain based on the classification categories. In FOLD-R++, the information gain for a given literal is calculated as shown in Algorithm 3.

Algorithm 2 FOLD-R Algorithm's Specialize function
1: function SPECIALIZE(c, E⁺, E⁻)
2:   while size(E⁻) > 0 do
3:     (c₁, IG₁) ← test_categorical(c, E⁺, E⁻)
4:     (c₂, IG₂) ← test_numeric(c, E⁺, E⁻)
5:     if IG₁ = 0 & IG₂ = 0 then
6:       ĉ ← EXCEPTION(c, E⁻, E⁺)
7:       if ĉ = null then
8:         ĉ ← enumerate(c, E⁺)
9:       end if
10:     else
11:       if IG₁ ≥ IG₂ then
12:         ĉ ← c₁
13:       else
14:         ĉ ← c₂
15:       end if
16:     end if
17:     E⁻ ← E⁻ \ covers(ĉ, E⁻)
18:   end while
19: end function

Algorithm 3 FOLD-R++ Algorithm, Information Gain function
1: function IG(tp, fn, tn, fp)
2:   if fp + fn > tp + tn then
3:     return −∞
4:   end if
5:   pos, neg ← tp + fn, tn + fp
6:   tot ← pos + neg
7:   result ← (tp/tot × log₂(tp/(tp+fp)))_{tp>0} + (fp/tot × log₂(fp/(tp+fp)))_{fp>0}
8:   result ← result + (tn/tot × log₂(tn/(tn+fn)))_{tn>0} + (fn/tot × log₂(fn/(tn+fn)))_{fn>0}
9:   return result
10: end function

The variables \( tp, fn, tn, fp \) in Algorithm 3 represent the numbers of true positive, false negative, true negative, and false positive examples, respectively; a subscript such as \( (\cdot)_{tp>0} \) indicates that the term is included only when the condition holds. With the function above, the new approach employs the prefix sum technique to speed up the calculation: only one round of classification is needed for a single feature, even with mixed types of values. The new approach to calculating the best IG and literal is summarized in Algorithm 4.

Example 2 Given positive and negative examples, \( E⁺, E⁻ \), with mixed types of values on feature \( i \), the goal is to find the literal with the best information gain on the given feature. There are 8 positive examples; their values on feature \( i \) are \( \{1, 2, 3, 5, 6, 6, 6\} \). The values on feature \( i \) of the 5 negative examples are \( \{2, 4, 6, 7, a\} \).

Algorithm 4 FOLD-R++ Algorithm, Best Information Gain function
Input: E⁺, E⁻, i
Output: best, l ▷ best: the best IG of feature i; l: the literal with IG best
1: function BEST_INFO_GAIN(E⁺, E⁻, i)
2:   pos, neg ← count_classification(E⁺, E⁻, i)
3:   ▷ pos, neg are dictionaries holding the # of pos / neg examples for each value
4:   xs, cs ← collect_unique_values(E⁺, E⁻, i)
5:   ▷ xs, cs are lists holding the unique numeric and categorical values
6:   xp, xn, cp, cn ← count_total(E⁺, E⁻, i)
7:   ▷ (xp, xn) are the total # of pos / neg examples with numeric values; (cp, cn) are the same for categorical values
8:   xs ← counting_sort(xs)
9:   for j ← 1 to size(xs) do
10:    pos[xs_j] ← pos[xs_j] + pos[xs_{j−1}]
11:    neg[xs_j] ← neg[xs_j] + neg[xs_{j−1}]
12:  end for
13:  for x ∈ xs do
14:    lit_dict[literal(i, ≤, x)] ← IG(pos[x], xp − pos[x] + cp, xn − neg[x] + cn, neg[x])
15:    lit_dict[literal(i, >, x)] ← IG(xp − pos[x], pos[x] + cp, neg[x] + cn, xn − neg[x])
16:  end for
17:  for c ∈ cs do
18:    lit_dict[literal(i, =, c)] ← IG(pos[c], cp − pos[c] + xp, cn − neg[c] + xn, neg[c])
19:    lit_dict[literal(i, ≠, c)] ← IG(xp + cp − pos[c], pos[c], neg[c], cn − neg[c] + xn)
20:  end for
21:  best, l ← best_pair(lit_dict)
22:  return best, l
23: end function

With the given examples and the specified feature, the numbers of positive and negative examples for each unique value are counted first; they are shown as pos and neg on the right side of Table 1. Then the prefix sum arrays pos_sum and neg_sum are computed for the heuristic. Table 2 shows the information gain for each literal; the literal \( (i, \neq, a) \) is selected because it has the highest score.

### 3.2 Justification

Explainability is very important for tasks like loan approval, credit card approval, and disease diagnosis. Answer set programming provides explicit rules showing how a prediction is generated, in contrast to black-box models such as those based on neural networks. To efficiently justify its predictions, FOLD-R++ outputs answer set programs that are compatible with the s(CASP) goal-directed ASP system (Arias et al. 2018).
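Before moving to justification, the information-gain heuristic of Algorithm 3 (Section 3.1) can be made concrete with a short Python sketch. This is a simplified illustration of the heuristic as described above, not the actual FOLD-R++ implementation:

```python
import math

def ig(tp, fn, tn, fp):
    """Information-gain heuristic of Algorithm 3 (simplified sketch).

    tp/fn/tn/fp are the true-positive, false-negative, true-negative,
    and false-positive counts induced by a candidate literal.
    """
    # A literal that misclassifies more examples than it classifies
    # correctly is rejected outright.
    if fp + fn > tp + tn:
        return float("-inf")
    tot = tp + fn + tn + fp
    result = 0.0
    # Each term is included only when its guard (tp > 0, etc.) holds,
    # mirroring the subscripted terms in Algorithm 3.
    if tp > 0:
        result += (tp / tot) * math.log2(tp / (tp + fp))
    if fp > 0:
        result += (fp / tot) * math.log2(fp / (tp + fp))
    if tn > 0:
        result += (tn / tot) * math.log2(tn / (tn + fn))
    if fn > 0:
        result += (fn / tot) * math.log2(fn / (tn + fn))
    return result

# A perfect split (no misclassification) attains the maximum value 0.
perfect = ig(5, 0, 5, 0)
# A split dominated by misclassification is rejected with -inf.
rejected = ig(1, 3, 0, 2)
# An imperfect but acceptable split scores strictly below 0.
imperfect = ig(7, 0, 2, 3)
```

Literal selection then amounts to evaluating `ig` for every candidate literal, with the prefix-sum counts of Algorithm 4 supplying the four arguments in constant time per literal.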
**Example 3** The “Titanic Survival Prediction” task is a classical classification challenge with 891 passengers as training examples and 418 passengers as testing examples; survival is predicted from features such as sex, age, number of siblings/spouses, number of parents/children, etc. FOLD-R++ generates the following program with only 12 rules:

1. status(X, 0) :- sex(X, 'male'), not ab1(X), not ab3(X), not ab5(X).
2. status(X, 0) :- class(X, '3'), not sex(X, 'male'), fare(X, N4), N4 > 22.5, not ab6(X), not ab7(X).
3. status(X, 0) :- class(X, '3'), not sex(X, 'male'), age(X, N1), N1 > 16.0, number_of_siblings_spouses(X, N2), N2 =< 2.0, fare(X, N4), N4 =< 12.45, N4 =< 18.0, number_of_parents_children(X, N3), N3 =< 1.0, not ab8(X), not ab9(X).
4. status(X, 0) :- number_of_siblings_spouses(X, N2), N2 =< 2.0, fare(X, N4), N4 =< 26.25, age(X, N1), N1 =< 3.0, N1 =< 2.0.
5. ab2(X) :- fare(X, N4), N4 =< 20.0, ab3(X), ab5(X).
6. ab4(X) :- age(X, N1), N1 =< 52.0, fare(X, N4), N4 =< 25.587, N4 =< 26.55, not ab2(X).
7. ab7(X) :- fare(X, N4), N4 =< 31.25, N4 =< 31.387.
8. ab8(X) :- fare(X, N4), N4 =< 15.5, N4 =< 17.4, age(X, N1), N1 =< 24.0.
9. ab9(X) :- age(X, N1), N1 > 32.0, N1 =< 36.0.

Note that status(X, 0) means that the person whose id is X perished, while status(X, 1) means that the person with id X survived.
Table 1: Examples and their values on the $i^{th}$ feature, with the per-value pos/neg counts.

<table>
<thead>
<tr>
<th>value</th>
<th>≤ value</th>
<th>&gt; value</th>
<th>= value</th>
<th>≠ value</th>
</tr>
</thead>
<tbody>
<tr>
<td>1</td>
<td>−∞</td>
<td>-0.664</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>2</td>
<td>−∞</td>
<td>-0.666</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>3</td>
<td>-0.619</td>
<td>−∞</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>4</td>
<td>-0.661</td>
<td>−∞</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>5</td>
<td>-0.642</td>
<td>−∞</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>6</td>
<td>-0.616</td>
<td>−∞</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>7</td>
<td>-0.661</td>
<td>−∞</td>
<td>na</td>
<td>na</td>
</tr>
<tr>
<td>a</td>
<td>na</td>
<td>na</td>
<td>−∞</td>
<td>-0.588</td>
</tr>
<tr>
<td>b</td>
<td>na</td>
<td>na</td>
<td>−∞</td>
<td>-0.627</td>
</tr>
</tbody>
</table>

Table 2: The info gain on the $i^{th}$ feature with the given examples.

Note that we don't have any rules generated for status(X, 1), so we could add the rule: status(X, 1) :- not status(X, 0). The above program achieves 0.94 accuracy, 0.97 precision, 0.93 recall, and 0.95 $F_1$ score, which is quite remarkable. Given a new data sample, the predicted answer for it can be efficiently produced from the above answer set program by the s(CASP) system, which can also produce a justification (a proof tree) for the prediction. Since s(CASP) is query driven, an example query such as ?- status(926, S), which checks whether the passenger with id 926 perished or survived, will succeed with S bound to 0 if the status of passenger 926 is indeed predicted as perished by the model represented by the answer set program above. The s(CASP) system can provide a proof for each query.
The English descriptions of predicates are also needed to output the proof tree in a human-readable format. The meaning of predicates in English is given via the \#pred declaration, as shown below via examples:

\begin{verbatim}
#pred age(X,Y) :: 'person @(X) is of age @(Y)'.
#pred number_of_siblings_spouses(X,Y) ::
      'person @(X) had @(Y) siblings or spouses'.
#pred ab9(X) :: 'abnormal case 9 holds for @(X)'.
\end{verbatim}

The s(CASP) system can even generate this proof in a human-understandable form (Arias et al. 2020). For example, here is the justification tree generated for the passenger with id 926:

\begin{verbatim}
?- status(926,X).
% QUERY: I would like to know if 'status' holds (for 926, and X).
ANSWER: 1 (in 4.825 ms)
JUSTIFICATION_TREE:
person 926 perished, because
    person 926 is male, and
    there is no evidence that 'ab1' holds (for 926), because
        there is no evidence that person 926 paid Var1 not equal 57.75
            for the ticket, and
        person 926 paid 57.75 for the ticket, and
        there is no evidence that 'number_of_siblings_spouses' holds
            (for 926, and Var8),
    there is no evidence that abnormal case 3 holds for 926, because
        there is no evidence that 'class' holds (for 926, and 1),
    there is no evidence that abnormal case 5 holds for 926, because
        there is no evidence that person 926 is of age Var2 not equal 30, and
        person 926 is of age 30.
The global constraints hold.
\end{verbatim}

With the justification tree, the reason for a prediction can be easily understood by human beings. The generated ASP rule set can also be understood by a human; in fact, s(CASP) can print the ASP rules in English, given descriptions of the predicates via the \#pred declaration explained above. If any unreasonable logic is generated in the rule set, it can also be modified directly by the human without retraining.
Thus, any bias in the data that is captured in the generated ASP rules can be corrected by a human user, and the updated ASP rule set can then be used for making new predictions. An example translation for two of the rules (Rules (1) and (12)) above is shown below:

\begin{verbatim}
(1) person X perished, if
        person X is male and
        there is no evidence that 'ab1' holds (for X) and
        there is no evidence that abnormal case 3 holds for X and
        there is no evidence that abnormal case 5 holds for X.

(12) abnormal case 9 holds for X, if
        person X is of age Y and Y is greater than 32.0 and
        person X is of age Y and Y is less or equal 36.0.
\end{verbatim}

Note that if a data sample is not predicted to hold, because the corresponding query fails in s(CASP), then a justification can be generated by asking the negation of the query. The s(CASP) system supports constructive negation, so negated queries can be executed and their justification/proof generated just as easily as for positive queries.

### 4 Experiments and Performance Evaluation

In this section, we present our experiments on standard UCI benchmarks (Lichman 2013). The XGBoost classifier is a popular classification model and is used as the baseline in our experiments. We used simple settings for the XGBoost classifier, without limiting its performance. However, XGBoost cannot deal with mixed-type (numerical and categorical) examples directly, so one-hot encoding was used for data preparation. We use precision, recall, accuracy, $F_1$ score, and execution time to compare the results. FOLD-R++ does not require any encoding before training. The original FOLD-R system used the JPL library with a Java implementation; we implemented FOLD-R++ in Python only. To make inferences using the generated rules, we developed a simple ASP interpreter, part of the FOLD-R++ system, for our application.
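The interpreter only has to handle stratified rules over ground atoms. A toy version, run on the fly/bird/penguin program of Example 1, might look like the following (a hypothetical sketch for illustration, not the actual FOLD-R++ interpreter; the rule representation is an assumption):

```python
# Toy evaluator for stratified, ground normal programs (hypothetical sketch).
# Each rule is (head, body_atoms, negated_atoms); strata are evaluated
# bottom-up, so every negated atom is already fully decided when consulted.
def evaluate(strata, facts):
    derived = set(facts)
    for rules in strata:
        changed = True
        while changed:  # iterate to a fixpoint within the stratum
            changed = False
            for head, body, neg in rules:
                if head in derived:
                    continue
                if all(a in derived for a in body) and \
                   all(a not in derived for a in neg):
                    derived.add(head)
                    changed = True
    return derived

# Example 1's learned program, grounded for a single individual:
#   ab0 :- penguin.        (stratum 0)
#   fly :- bird, not ab0.  (stratum 1)
strata = [
    [("ab0", ["penguin"], [])],
    [("fly", ["bird"], ["ab0"])],
]
tweety = evaluate(strata, {"bird"})             # a bird, not a penguin
polly = evaluate(strata, {"bird", "penguin"})   # a penguin
```

Because negation only refers to lower strata, a single bottom-up pass with a per-stratum fixpoint suffices; this is why a Python interpreter for this restricted class is straightforward.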
Note that the generated programs are stratified and predicates contain only variables and constants, so implementing an interpreter for such a restricted class in Python is relatively easy. However, for obtaining the justification/proof tree, or for translating the ASP rules into equivalent English text, one must use the s(CASP) system.

We also compare the FOLD-R++ algorithm with the RIPPER algorithm (Cohen 1995). RIPPER generates formulas in conjunctive normal form as an explanation of the model. Table 4 shows the comparison for two datasets from the UCI repository (Adult and Credit Card). FOLD-R++ outperforms RIPPER in all categories except precision. Most significantly, FOLD-R++ generates a much smaller number of rules, and its computation time is also much lower. As discussed earlier, the time complexity of computing the information gain on a feature is significantly reduced in FOLD-R++ due to the use of prefix sums; therefore, we obtain a rather large improvement in efficiency. For the credit dataset, a dataset with only 690 instances, the new FOLD-R++ algorithm is hundreds of times faster than the original FOLD-R.

All the learning experiments were conducted on a desktop with an Intel i5-10400 CPU @ 2.9 GHz and 32 GB RAM. To measure performance, we conducted 10-fold cross-validation on each dataset, and the averages of accuracy, precision, recall, and execution time are presented. Table 3 reports the performance metrics and execution time on each dataset compared with the baseline model; the best performer is highlighted in boldface. The XGBoost classifier employs a decision-tree ensemble method for the classification task and provides quite decent performance. FOLD-R++ almost always spends less time on learning than the XGBoost classifier, especially for the large Adult income census dataset. For most of the datasets, FOLD-R++ achieves equivalent scores, and it achieves much higher scores on the ecoli and sonar datasets.
For the credit card dataset, the baseline XGBoost model failed to train due to the 32 GB memory limitation, but FOLD-R++ still finished training quite efficiently.

### 5 Related Work

ALEPH (Srinivasan 2001) is one of the most popular ILP systems; it induces theories using a bottom-up generalization search. However, it cannot deal with numeric features, and its specialization step is manual, with no automation option. Takemura and Inoue's method (Takemura and Inoue 2021) relies on tree ensembles to generate explainable rule sets with pattern-mining techniques. Its performance depends on the tree-ensemble model, and it may not be scalable, since its computational time complexity is exponential in the number of valid rules. A survey of ILP can be found in (Muggleton et al. 2012).

Rule extraction from statistical machine learning models has been a long-standing goal of the community. Rule extraction algorithms for machine learning models fall into two categories: 1) pedagogical (learning symbolic rules from black-box classifiers without opening them) and 2) decompositional (opening the classifier and looking into its internals). TREPAN (Craven and Shavlik 1995) is a successful pedagogical algorithm that learns decision trees from neural networks. SVM+Prototypes (Núñez, Angulo, and Catalá 2002) is a decompositional rule extraction algorithm that uses K-means clustering to extract rules from SVM classifiers by focusing on support vectors. Another rule extraction technique that has been gaining attention recently is RuleFit (Friedman, Popescu, and others 2008), which learns a set of weighted rules from an ensemble of shallow decision trees combined with the original features. In the ILP community, too, researchers have tried to combine statistical methods with ILP techniques. Support Vector ILP (Muggleton et al. 2005) uses ILP hypotheses as the kernel in the dual form of the SVM algorithm. kFOIL (Landwehr et al.
2006) learns an incremental kernel for the SVM algorithm using FOIL-style specialization. nFOIL (Landwehr, Kersting, and Raedt 2005) integrates the Naïve Bayes algorithm with FOIL. The advantage of our research over all of the above-mentioned work is, first, that we generate answer set programs containing negation-as-failure that correspond closely to the human thought process, so the descriptions are more concise. Second, it is scalable thanks to the greedy nature of our clause search.

<table>
<thead>
<tr>
<th>DataSet</th>
<th>Shape</th>
<th>Acc.</th>
<th>Prec.</th>
<th>Rec.</th>
<th>F1</th>
<th>Time (ms)</th>
</tr>
</thead>
<tbody>
<tr>
<td>acute</td>
<td>(120, 7)</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>35</td>
</tr>
<tr>
<td>autism</td>
<td>(704, 18)</td>
<td>0.97</td>
<td>0.98</td>
<td>0.98</td>
<td>0.97</td>
<td>76</td>
</tr>
<tr>
<td>breast-w</td>
<td>(699, 10)</td>
<td>0.95</td>
<td>0.97</td>
<td>0.96</td>
<td>0.96</td>
<td>78</td>
</tr>
<tr>
<td>cars</td>
<td>(1728, 7)</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>77</td>
</tr>
<tr>
<td>credit-a</td>
<td>(690, 16)</td>
<td>0.85</td>
<td>0.83</td>
<td>0.83</td>
<td>0.83</td>
<td>368</td>
</tr>
<tr>
<td>ecoli</td>
<td>(336, 9)</td>
<td>0.76</td>
<td>0.76</td>
<td>0.62</td>
<td>0.68</td>
<td>165</td>
</tr>
<tr>
<td>heart</td>
<td>(270, 14)</td>
<td>0.80</td>
<td>0.81</td>
<td>0.83</td>
<td>0.81</td>
<td>112</td>
</tr>
<tr>
<td>ionosphere</td>
<td>(351, 35)</td>
<td>0.88</td>
<td>0.86</td>
<td>0.96</td>
<td>0.90</td>
<td>1,126</td>
</tr>
<tr>
<td>kidney</td>
<td>(400, 25)</td>
<td>0.98</td>
<td>0.98</td>
<td>0.98</td>
<td>0.98</td>
<td>126</td>
</tr>
<tr>
<td>kr vs. kp</td>
<td>(3196, 37)</td>
<td>0.99</td>
<td>0.99</td>
<td>0.99</td>
<td>0.99</td>
<td>210</td>
</tr>
<tr>
<td>mushroom</td>
<td>(8124, 23)</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>1</td>
<td>378</td>
</tr>
<tr>
<td>sonar</td>
<td>(208, 61)</td>
<td>0.53</td>
<td>0.54</td>
<td>0.84</td>
<td>0.65</td>
<td>1,178</td>
</tr>
<tr>
<td>voting</td>
<td>(435, 17)</td>
<td>0.95</td>
<td>0.94</td>
<td>0.95</td>
<td>0.94</td>
<td>49</td>
</tr>
<tr>
<td>adult</td>
<td>(32561, 15)</td>
<td>0.86</td>
<td>0.88</td>
<td>0.94</td>
<td>0.91</td>
<td>274,655</td>
</tr>
<tr>
<td>credit card</td>
<td>(30000, 24)</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
<td>-</td>
</tr>
</tbody>
</table>

Table 3: Evaluation of FOLD-R++ on UCI Datasets

<table>
<thead>
<tr>
<th>Data</th>
<th>Adult</th>
<th>Credit card</th>
</tr>
</thead>
<tbody>
<tr>
<td>Shape</td>
<td>(32561, 15)</td>
<td>(30000, 24)</td>
</tr>
<tr>
<td>Algo</td>
<td>RIPPER</td>
<td>FOLD-R++</td>
</tr>
<tr>
<td>Acc.</td>
<td>0.70</td>
<td>0.84</td>
</tr>
<tr>
<td>Prec</td>
<td>0.96</td>
<td>0.86</td>
</tr>
<tr>
<td>Rec</td>
<td>0.63</td>
<td>0.95</td>
</tr>
<tr>
<td>F1</td>
<td>0.76</td>
<td>0.90</td>
</tr>
<tr>
<td># Rules</td>
<td>46.9</td>
<td>16.7</td>
</tr>
<tr>
<td>Time</td>
<td>59.5s</td>
<td>10.1s</td>
</tr>
</tbody>
</table>

Table 4: Comparison with the RIPPER Algorithm

### 6 Conclusions and Future Work

In this paper we presented an efficient and highly scalable algorithm, FOLD-R++, that induces default theories represented as answer set programs. The resulting answer set program performs well with respect to both prediction and justification of the predicted classification. In this new approach, unlike other methods, no encoding of the data is needed and no information from the training data is discarded. Compared with the popular classification system XGBoost, our new approach has similar performance in terms of accuracy, precision, recall, and $F_1$ score, but better training efficiency.
In addition, the FOLD-R++ algorithm produces an explainable model: predictions made by this model can be computed efficiently, and their justifications automatically produced, using the s(CASP) system.

Acknowledgements: The authors gratefully acknowledge support from NSF grants IIS 1718945, IIS 1910131, and IIP 1916206, and from Amazon Corp., Atos Corp., and the US DoD. Thanks to Farhad Shakerin for discussions. We are grateful to Joaquín Arias and the s(CASP) team for their work on providing facilities for generating the justification tree and the English rendering of rules in s(CASP).
2. Application Layer

Chapter 2: Application Layer
- application
- transport
- network
- link
- physical

Chapter 2: Application Layer

Our Goal:
• Conceptual, implementation aspects of network application protocols
• Transport-layer service models
• Client-server paradigm
• Peer-to-peer paradigm
• Content distribution networks
• Learn about protocols by examining popular application-level protocols
  • HTTP
  • SMTP / POP3 / IMAP
  • DNS
• Creating network applications
  • socket API

Chapter 2: Outline
- Principles of network applications
- Socket Programming with UDP and TCP
- Web and HTTP
- Electronic Mail (SMTP, POP3, IMAP)
- DNS
- P2P Applications
- Video Streaming and Content Distribution Networks

Creating a network app

Write programs that:
• run on (different) end systems
• communicate over network
• e.g., web server software communicates with browser software

No need to write software for network-core devices
• network-core devices do not run user applications
• applications on end systems allow for rapid app development, propagation

Application architectures

Possible structure of applications:
• Client-server
• Peer-to-peer (P2P)
• Hybrid

Client-server architecture

**server:** always-on host
- permanent IP address
- data centers for scaling

**clients:**
- communicate with server
- may be intermittently connected
- may have dynamic IP addresses
- do not communicate directly with each other

P2P architecture
- *no* always-on server
- arbitrary end systems
directly communicate
- peers request service from other peers, provide service in return to other peers
- *self scalability* – new peers bring new service capacity, as well as new service demands
- peers are intermittently connected and change IP addresses
- *complex management*

Hybrid of client-server and P2P

Skype
- Internet telephony app
- Finding address of remote party: centralized server(s)
- Client-client connection is direct (not through server)

Instant messaging
- Chatting between two users is P2P
- Presence detection/location centralized:
  - User registers its IP address with central server when it comes online
  - User contacts central server to find IP addresses of buddies

Case Study: Napster vs. Gnutella
Any problem with this architecture?

Processes communicating

*process*: program running within a host
- within same host, two processes communicate using *inter-process communication* (defined by OS)
- processes in different hosts communicate by exchanging messages

clients, servers
*client process*: process that initiates communication
*server process*: process that waits to be contacted
- aside: same process can be both a client and a server for different connections; e.g., in P2P networks

Sockets
- process sends/receives messages to/from its socket
- socket analogous to mailbox
- App A puts message in mailbox/socket
- App A relies on transport infrastructure to pick up message from A’s socket/mailbox and deliver it to B’s socket

Addressing processes
• to receive messages, process must have **identifier**
• host device has unique 32-bit IP address
• **Q:** does IP address of host on which process runs suffice for identifying the process?
• **A:** no, *many* processes can be running on same host
• **identifier** includes both IP address and port number associated with process on host.
• example port numbers:
  • HTTP server: 80
  • mail server: 25
• to send HTTP message to gaia.cs.umass.edu web server:
  • IP address: 128.119.245.12
  • port number: 80
• more shortly...

App-layer protocol defines
- types of messages exchanged,
  - e.g., request, response
- message syntax:
  - what fields in messages & how fields are delineated
- message semantics
  - meaning of information in fields
- rules for when and how processes send & respond to messages

open protocols:
- defined in RFCs
- allows for interoperability
- e.g., HTTP, SMTP

proprietary protocols:
- e.g., Skype

What transport service does an app need?

**data loss**
- some apps (e.g., file transfer, web transactions) require 100% reliable data transfer
- other apps (e.g., audio) can tolerate some loss

**timing**
- some apps (e.g., Internet telephony, interactive games) require low delay to be “effective”

**throughput**
- some apps (e.g., multimedia) require minimum amount of throughput to be “effective”
- other apps (“elastic apps”) make use of whatever throughput they get

Why is bandwidth different from timing constraints?
## Transport service requirements: common apps <table> <thead> <tr> <th>application</th> <th>data loss</th> <th>throughput</th> <th>time sensitive</th> </tr> </thead> <tbody> <tr> <td>file transfer</td> <td></td> <td></td> <td></td> </tr> <tr> <td>e-mail</td> <td></td> <td></td> <td></td> </tr> <tr> <td>Web documents</td> <td></td> <td></td> <td></td> </tr> <tr> <td>real-time audio/video</td> <td></td> <td></td> <td></td> </tr> <tr> <td>stored audio/video</td> <td></td> <td></td> <td></td> </tr> <tr> <td>interactive games</td> <td></td> <td></td> <td></td> </tr> <tr> <td>text messaging</td> <td></td> <td></td> <td></td> </tr> </tbody> </table> ## Transport service requirements: common apps <table> <thead> <tr> <th>application</th> <th>data loss</th> <th>throughput</th> <th>time sensitive</th> </tr> </thead> <tbody> <tr> <td>file transfer</td> <td>no loss</td> <td>elastic</td> <td>no</td> </tr> <tr> <td>e-mail</td> <td>no loss</td> <td>elastic</td> <td>no</td> </tr> <tr> <td>Web documents</td> <td>no loss</td> <td>elastic</td> <td>no</td> </tr> <tr> <td>real-time audio/video</td> <td>loss-tolerant</td> <td>audio: 5kbps-1Mbps</td> <td>yes, 100’s msec</td> </tr> <tr> <td></td> <td></td> <td>video: 10kbps-5Mbps</td> <td>yes, few secs</td> </tr> <tr> <td>stored audio/video</td> <td>loss-tolerant</td> <td>same as above</td> <td>yes, 100’s ms</td> </tr> <tr> <td>interactive games</td> <td>loss-tolerant</td> <td>few kbps up</td> <td>yes and no</td> </tr> <tr> <td>text messaging</td> <td>no loss</td> <td>elastic</td> <td>no</td> </tr> </tbody> </table> Internet transport protocols services **TCP service:** - *reliable transport* between sending and receiving process - *flow control:* sender won’t overwhelm receiver - *congestion control:* throttle sender when network overloaded - *does not provide:* timing, minimum throughput guarantee, security - *connection-oriented:* setup required between client and server processes **UDP service:** - *unreliable data transfer* between 
sending and receiving process - *does not provide:* reliability, flow control, congestion control, timing, throughput guarantee, security, or connection setup, - **Q:** why bother? Why is there a UDP? ## Internet apps: application, transport protocols <table> <thead> <tr> <th>application</th> <th>application layer protocol</th> <th>underlying transport protocol</th> </tr> </thead> <tbody> <tr> <td>e-mail</td> <td>SMTP [RFC 2821]</td> <td></td> </tr> <tr> <td>remote terminal access</td> <td>Telnet [RFC 854], SSH</td> <td></td> </tr> <tr> <td>Web</td> <td>HTTP [RFC 2616]</td> <td></td> </tr> <tr> <td>file transfer</td> <td>FTP [RFC 959]</td> <td></td> </tr> <tr> <td>streaming multimedia</td> <td>HTTP (e.g., YouTube), RTP [RFC 1889]</td> <td></td> </tr> <tr> <td>Internet telephony</td> <td>SIP, RTP, proprietary (e.g., Skype)</td> <td></td> </tr> <tr> <td>naming</td> <td>DNS</td> <td></td> </tr> </tbody> </table> Chapter 2: Outline - Principles of network applications - Socket Programming with UDP and TCP - Web and HTTP - Electronic Mail (SMTP, POP3, IMAP) - DNS - P2P Applications - Video Streaming and Content Distribution Networks Socket programming **goal:** learn how to build client/server applications that communicate using sockets **socket:** door between application process and end-end-transport protocol Two socket types for two transport services: - **UDP**: unreliable datagram (User Datagram Protocol) - **TCP**: reliable, byte stream-oriented (Transmission Control Protocol) Application Example: 1. client reads a line of characters (data) from its keyboard and sends data to server 2. server receives the data and converts characters to uppercase 3. server sends modified data to client 4. 
client receives modified data and displays line on its screen

Socket programming *with UDP*

**UDP: no “connection” between client & server**
- no handshaking before sending data
- sender explicitly attaches IP destination address and port # to each packet
- receiver extracts sender IP address and port # from received packet

**UDP: transmitted data may be lost or received out-of-order**

**Application viewpoint:**
- UDP provides *unreliable* transfer of groups of bytes (“datagrams”) between client and server

Client/server socket interaction: UDP

server (running on serverIP):
- create socket, port = x: serverSocket = socket(AF_INET, SOCK_DGRAM)
- read datagram from serverSocket
- write reply to serverSocket, specifying client address, port number

client:
- create socket: clientSocket = socket(AF_INET, SOCK_DGRAM)
- create datagram with server IP and port = x; send datagram via clientSocket
- read datagram from clientSocket
- close clientSocket

**Example app: UDP client**

**C UDP Client**

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

serverPort = 12000
clientSocket = socket(AF_INET, SOCK_DGRAM, 0)
sendto(clientSocket, message, msg_length, dest_info, dest_info_len);
close(clientSocket)
```

- dest_info carries the server’s information (IP address and port 12000)
- close() takes the descriptor returned by socket()

**Example app: UDP server**

**C UDP Server**

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <netdb.h>

serverPort = 12000
serverSocket = socket(AF_INET, SOCK_DGRAM, 0)
bind(serverSocket, own_addr_info, addr_len)
while (1) {
    numbytes = recvfrom(serverSocket, buf, buf_len, 0, (struct sockaddr *)&their_addr, &addr_len)
    print buf
}
```

- Local port number 12000
- Create UDP socket FD
- Bind to the socket FD
- Read from UDP socket into buf, getting client’s address (client IP and port)
- their_addr tells us who we received the message from
Socket programming with TCP

**client must contact server**
- server process must first be running
- server must have created socket (door) that welcomes client’s contact

**client contacts server by:**
- creating TCP socket, specifying IP address, port number of server process
- *when client creates socket:* client TCP establishes connection to server TCP

**when contacted by client,** server TCP creates new socket for server process to communicate with that particular client
- allows server to talk with multiple clients
- source port numbers used to distinguish clients (more in Chap 3)

**application viewpoint:** TCP provides reliable, in-order byte-stream transfer (“pipe”) between client and server

Client/server socket interaction: TCP

**server (running on hostid)**
- create socket, port=x, for incoming request: serverSocket = socket()
- wait for incoming connection request: connectionSocket = serverSocket.accept()
- read request from connectionSocket
- write reply to connectionSocket
- close connectionSocket

**client**
- create socket, connect to hostid, port=x: clientSocket = socket() (TCP connection setup happens here)
- send request using clientSocket
- read reply from clientSocket
- close clientSocket

Example app: TCP client

**C TCP Client**

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

serverName = 'servername'
serverPort = 12000
clientSocket = socket(AF_INET, SOCK_STREAM, 0)
connect(clientSocket, server_info, server_info_len)
numbytes = recv(clientSocket, buf, buf_size, 0)
close(clientSocket)
```

- create TCP socket: specify SOCK_STREAM
- server_info holds the address and port of the server to connect to
- connect() establishes the connection with the server
- recv() on a connected socket does not need the server’s address

Example app: TCP server

C TCP Server

```c
#include <sys/types.h>
#include <sys/socket.h>
#include <netinet/in.h>
#include <netdb.h>
#include <arpa/inet.h>

serverPort = 12000
serverSocket = socket(AF_INET, SOCK_STREAM,
0)
bind(serverSocket, serverAddr, serverAddr_len)
listen(serverSocket, queueSize)
while (1) {
    new_fd = accept(serverSocket, theirAddr, theirAddr_len)
    send(new_fd, "Hello, world!", 13, 0)
    close(new_fd);
}
close(serverSocket)
```

Some Details

Communication with diverse systems

Utility function for address and service translation:

```c
int getaddrinfo(const char *node, const char *service,
                const struct addrinfo *hints, struct addrinfo **res);
```

Utility functions for byte-ordering (networks use big-endian byte ordering, so hosts convert between host and network order):

```c
uint32_t htonl(uint32_t hostlong);
uint16_t htons(uint16_t hostshort);
uint32_t ntohl(uint32_t netlong);
uint16_t ntohs(uint16_t netshort);
```

Utility function for converting IPv4 and IPv6 addresses from binary to text form:

```c
const char *inet_ntop(int af, const void *src, char *dst, socklen_t size);
```

Chapter 2: Outline ✓ Principles of network applications ✓ Socket Programming with UDP and TCP ❑ Web and HTTP ❑ Electronic Mail (SMTP, POP3, IMAP) ❑ DNS ❑ P2P Applications ❑ Video Streaming and Content Distribution Networks

Web and HTTP

First, a review...
- **web page** consists of **objects**
- object can be HTML file, JPEG image, Java applet, audio file,...
- web page consists of **base HTML-file** which includes **several referenced objects**
- each object is addressable by a **URL**, e.g., ``` www.someschool.edu/someDept/pic.gif ```
  - **host name**: www.someschool.edu
  - **path name**: /someDept/pic.gif

HTTP overview

HTTP: hypertext transfer protocol
- Web’s application layer protocol
- client/server model
- **client**: browser that requests, receives (using HTTP protocol), and “displays” Web objects
- **server**: Web server sends (using HTTP protocol) objects in response to requests
- (figure: PC running Firefox browser and iPhone running Safari browser exchanging requests/responses with a server running Apache Web server)

HTTP overview

*Uses TCP:*
- client initiates TCP connection (creates socket) to server, port 80
- server accepts TCP connection from client
- HTTP messages (application-layer protocol messages) exchanged between browser (HTTP client) and Web server (HTTP server)
- TCP connection closed

*HTTP is “stateless”*
- server maintains no information about past client requests

--- *aside* protocols that maintain “state” are complex!
- past history (state) must be maintained
- if server/client crashes, their views of “state” may be inconsistent, must be reconciled

HTTP connections

**non-persistent HTTP**
- at most one object sent over TCP connection
- connection then closed
- downloading multiple objects requires multiple connections

**persistent HTTP**
- multiple objects can be sent over single TCP connection between client, server

Non-persistent HTTP

Suppose user enters URL: www.someSchool.edu/someDepartment/home.index

1a. HTTP client initiates TCP connection to HTTP server (process) at www.someSchool.edu on port 80
1b. HTTP server at host www.someSchool.edu, waiting for TCP connection at port 80, accepts connection, notifying client
2. HTTP client sends HTTP request message (containing URL) into TCP connection socket. Message indicates that client wants object someDepartment/home.index
3. HTTP server receives request message, forms response message containing requested object, and sends message into its socket
4. HTTP server closes TCP connection.
5.
HTTP client receives response message containing html file (text plus references to 10 jpeg images), displays html. Parsing html file, finds 10 referenced jpeg objects
6. Steps 1-5 repeated for each of the 10 jpeg objects

Non-persistent HTTP: response time

RTT (Round Trip Time): time for a small packet to travel from client to server and back

HTTP response time:
- one RTT to initiate TCP connection
- one RTT for HTTP request and first few bytes of HTTP response to return
- file transmission time
- non-persistent HTTP response time \[= 2\text{RTT} + \text{file transmission time}\]

Persistent HTTP:
- server leaves connection open after sending response
- subsequent HTTP messages between same client/server sent over open connection
- client sends requests as soon as it encounters a referenced object
- as little as one RTT for all the referenced objects

Other optimizations
- Pipelining: send several requests at once
- HTTP/2: push resources
- QUIC: eliminate first RTT

HTTP request message
- two types of HTTP messages: *request*, *response*
- HTTP request message:
- ASCII (human-readable format)

```
GET /index.html HTTP/1.1\r\n
Host: www-net.cs.umass.edu\r\n
User-Agent: Firefox/3.6.10\r\n
Accept: text/html,application/xhtml+xml\r\n
Accept-Language: en-us,en;q=0.5\r\n
Accept-Encoding: gzip,deflate\r\n
Accept-Charset: ISO-8859-1,utf-8;q=0.7\r\n
Keep-Alive: 115\r\n
Connection: keep-alive\r\n
\r\n
```

HTTP request message: general format
- request line - method sp URL sp version cr lf
- header field name : value cr lf
- header field name : value cr lf
- cr lf
- entity body

Uploading form input

**POST method:**
- web page often includes form input
- input is uploaded to server in entity body

**URL method:**
- uses GET method
- input is uploaded in URL field of request line:
www.somesite.com/animalsearch?monkeys&banana Method types **HTTP/1.0:** - GET - POST - HEAD - asks server to leave requested object out of response **HTTP/1.1:** - GET, POST, HEAD - PUT - uploads file in entity body to path specified in URL field - DELETE - deletes file specified in the URL field HTTP response message status line (protocol status code status phrase) HTTP/1.1 200 OK Date: Sun, 26 Sep 2010 20:09:20 GMT Server: Apache/2.0.52 (CentOS) Last-Modified: Tue, 30 Oct 2007 17:00:02 GMT ETag: "17dc6-a5c-bf716880" Accept-Ranges: bytes Content-Length: 2652 Keep-Alive: timeout=10, max=100 Connection: Keep-Alive Content-Type: text/html; charset=ISO-8859-1 data data data data data data data ... header lines data, e.g., requested HTML file HTTP response status codes - status code appears in 1st line in server-to-client response message. - some sample codes: 200 OK - request succeeded, requested object later in this msg 301 Moved Permanently - requested object moved, new location specified later in this msg (Location:) 400 Bad Request - request msg not understood by server 404 Not Found - requested document not found on this server 505 HTTP Version Not Supported Trying out HTTP (client side) for yourself 1. Telnet to your favorite Web server: telnet gaia.cs.umass.edu 80 opens TCP connection to port 80 (default HTTP server port) at gaia.cs.umass.edu. anything typed in will be sent to port 80 at gaia.cs.umass.edu 2. type in a GET HTTP request: GET /kurose_ross/interactive/index.php HTTP/1.1 Host: gaia.cs.umass.edu by typing this in (hit carriage return twice), you send this minimal (but complete) GET request to HTTP server 3. look at response message sent by HTTP server! 
(or use Wireshark to look at captured HTTP request/response)

User-server state: cookies

Many Web sites use cookies

four components:
1) cookie header line of HTTP *response* message
2) cookie header line in next HTTP *request* message
3) cookie file kept on user’s host, managed by user’s browser
4) back-end database at Web site

example:
- Susan always accesses Internet from her PC
- visits specific e-commerce site for first time
- when initial HTTP request arrives at site, site creates:
  - unique ID
  - entry in backend database for ID

Cookies: keeping “state” (cont.)
- client’s browser already has a cookie file with an entry for ebay (8734)
- client sends usual http request msg; Amazon server creates ID 1678 for user, creates entry in backend database
- server’s usual http response msg carries set-cookie: 1678; browser adds amazon 1678 to the cookie file
- client’s next usual http request msg carries cookie: 1678, triggering cookie-specific action at the server
- one week later: usual http request msg again carries cookie: 1678, again triggering cookie-specific action

Cookies (continued)

**what cookies can be used for:**
- authorization
- shopping carts
- recommendations
- user session state (Web e-mail)

**how to keep “state”:**
- protocol endpoints: maintain state at sender/receiver over multiple transactions
- cookies: http messages carry state

**cookies and privacy:**
- cookies permit sites to learn a lot about you
- you may supply name and e-mail to sites

Web caches (proxy server)

**goal:** satisfy client request without involving origin server
- user sets browser: Web accesses via cache
- browser sends all HTTP requests to cache
- object in cache: cache returns object
- else cache requests object from origin server, then returns object to client

More about Web caching
- cache acts as both client and server
  - server for original requesting client
  - client to origin server
- typically cache is installed by ISP (university, company, residential ISP)

why Web caching?
- reduce response time for client request
- reduce traffic on an institution’s access link
- Internet dense with caches: enables “poor” content providers to effectively deliver content

Caching example:

**assumptions:**
- avg object size: **1 Mbit**
- avg request rate from browsers to origin servers: **15/sec**
- avg data rate to browsers: **15 Mbps**
- RTT from institutional router to any origin server: **2 sec**
- access link rate: **15.4 Mbps**

**consequences:**
- LAN utilization: **1.5%**
- access link utilization = **97%**
- total delay = Internet delay + access delay + LAN delay = **2 sec + minutes + usecs**

Caching example: faster access link
- upgrade the access link to 100 Mbps: access link utilization drops from 97% to 15%, access delay drops from minutes to msecs
- Cost: increased access link speed (not cheap!)

Caching example: install a cache

**assumptions:** as above (15.4 Mbps access link)

**consequences:**
- LAN utilization: **1.5%**
- access link utilization = ??
- total delay = Internet delay + access delay + LAN delay = ??
*How to compute link utilization, delay?*

Caching example:

Calculating access link utilization, delay with cache:
- suppose cache hit rate is 0.4
  - 40% requests satisfied at cache, 60% requests satisfied at origin
- access link utilization:
  - 60% of requests use access link
  - data rate to browsers = 0.6*15 Mbps = 9 Mbps
  - utilization = 9/15.4 = 0.58 $\rightarrow$ 58%
- total delay = 0.6 * (delay from origin servers) + 0.4 * (delay when satisfied at cache) = 0.6 (2.01) + 0.4 (~msecs) = ~1.2 secs, less than with the 100 Mbps link

Cost: web cache (cheap!)

Conditional GET
- **Goal**: don’t send object if cache has up-to-date cached version
  - no object transmission delay
  - lower link utilization
- **cache**: specify date of cached copy in HTTP request: `If-modified-since: <date>`
- **server**: response contains no object if cached copy is up-to-date: `HTTP/1.0 304 Not Modified`

object not modified before \<date\>:
- HTTP request msg `If-modified-since: <date>`
- HTTP response `HTTP/1.0 304 Not Modified`

object modified after \<date\>:
- HTTP request msg `If-modified-since: <date>`
- HTTP response `HTTP/1.0 200 OK <data>`

Chapter 2: Outline - Principles of network applications - Socket Programming with UDP and TCP - Web and HTTP - Electronic Mail (SMTP, POP3, IMAP) - DNS - P2P Applications - Video Streaming and Content Distribution Networks

Electronic mail

Three major components:
- user agents
- mail servers
- SMTP: Simple Mail Transfer Protocol

User Agent
- a.k.a.
“mail reader” - composing, editing, reading mail messages - e.g., Outlook, Thunderbird, iPhone mail client - outgoing, incoming messages stored on server Electronic Mail: Mail Servers Mail Servers: • *mailbox* contains incoming messages for user • *message queue* of outgoing (to be sent) mail messages • *SMTP protocol* between mail servers to send email messages - client: sending mail server - “server”: receiving mail server Electronic Mail: SMTP [RFC 2821] - uses TCP to reliably transfer email message from client to server, port 25 - direct transfer: sending server to receiving server - three phases of transfer - handshaking (greeting) - transfer of messages - closure - command/response interaction (like HTTP) - commands: ASCII text - response: status code and phrase - messages must be in 7-bit ASCII Scenario: Alice sends message to Bob 1) Alice uses UA to compose message “to” bob@someschool.edu 2) Alice’s UA sends message to her mail server; message placed in message queue 3) Client side of SMTP opens TCP connection with Bob’s mail server 4) SMTP client sends Alice’s message over the TCP connection 5) Bob’s mail server places the message in Bob’s mailbox 6) Bob invokes his user agent to read message Sample SMTP interaction S: 220 hamburger.edu C: HELO crepes.fr S: 250 Hello crepes.fr, pleased to meet you C: MAIL FROM: <alice@crepes.fr> S: 250 alice@crepes.fr... Sender ok C: RCPT TO: <bob@hamburger.edu> S: 250 bob@hamburger.edu ... Recipient ok C: DATA S: 354 Enter mail, end with "." on a line by itself C: Do you like ketchup? C: How about pickles? C: . 
S: 250 Message accepted for delivery C: QUIT S: 221 hamburger.edu closing connection SMTP: final words • SMTP uses persistent connections • SMTP requires message (header & body) to be in 7-bit ASCII • SMTP server uses CRLF.CRLF to determine end of message comparison with HTTP: • HTTP: pull • SMTP: push • both have ASCII command/response interaction, status codes • HTTP: each object encapsulated in its own response message • SMTP: multiple objects sent in multipart message Mail message format SMTP: protocol for exchanging email messages RFC 822: standard for text message format: • header lines, e.g., • To: • From: • Subject: different from SMTP MAIL FROM, RCPT TO: commands! • Body: the “message” • ASCII characters only • **SMTP**: delivery/storage to receiver’s server • mail access protocol: retrieval from server - **POP**: Post Office Protocol [RFC 1939]: authorization, download - **IMAP**: Internet Mail Access Protocol [RFC 1730]: more features, including manipulation of stored messages on server - **HTTP**: gmail, Hotmail, Yahoo! Mail, etc. Chapter 2: Outline ✓ Principles of network applications ✓ Socket Programming with UDP and TCP ✓ Web and HTTP ✓ Electronic Mail (SMTP, POP3, IMAP) ❑ DNS ❑ P2P Applications ❑ Video Streaming and Content Distribution Networks DNS: domain name system **people:** many identifiers: - SSN, name, passport # **Internet hosts, routers:** - IP address (32 bit) - used for addressing datagrams - “name”, e.g., www.yahoo.com - used by humans **Q:** how to map between IP address and name, and vice versa? 
**Domain Name System:** - *distributed database* implemented in hierarchy of many *name servers* - *application-layer protocol*: hosts, name servers communicate to *resolve* names (address/name translation) - note: core Internet function, implemented as application-layer protocol - complexity at network’s “edge” DNS: services, structure **DNS services** - hostname to IP address translation - host aliasing - canonical, alias names - mail server aliasing - load distribution - replicated Web servers: many IP addresses correspond to one name **why not centralize DNS?** - single point of failure - traffic volume - distant centralized database - maintenance A: *doesn’t scale!* DNS: a distributed, hierarchical database Client wants IP for www.amazon.com; 1st approximation: - Client queries root server to find com DNS server - Client queries .com DNS server to get amazon.com DNS server - Client queries amazon.com DNS server to get IP address for www.amazon.com DNS: Root Name Servers • Contacted by local name server that cannot resolve name • Root name server: • Contacts authoritative name server if name mapping not known • Gets mapping • Returns mapping to local name server 13 logical root name “servers” worldwide • each “server” replicated many times TLD, Authoritative DNS Servers **Top-level domain (TLD) servers:** - responsible for com, org, net, edu, aero, jobs, museums, and all top-level country domains, e.g.: uk, fr, ca, jp - Network Solutions maintains servers for .com TLD - Educause for .edu TLD **Authoritative DNS servers:** Organization’s own DNS server(s), providing authoritative hostname to IP mappings for organization’s named hosts - can be maintained by organization or service provider Local DNS name server • does not strictly belong to hierarchy • each ISP (residential ISP, company, university) has one • also called “default name server” • when host makes DNS query, query is sent to its local DNS server • has local cache of recent name-to-address translation 
pairs (but may be out of date!) • acts as proxy, forwards query into hierarchy DNS name resolution example - host at cs.stanford.edu wants IP address for csl.illinois.edu **iterated query:** - contacted server replies with name of server to contact - “I don’t know this name, but ask this server” DNS name resolution example **recursive query:** - puts burden of name resolution on contacted name server - heavy load at upper levels of hierarchy? DNS: caching, updating records • once (any) name server learns mapping, it *caches* mapping • cache entries timeout (disappear) after some time (TTL) • TLD servers typically cached in local name servers • thus root name servers not often visited • cached entries may be *out-of-date* (best effort name-to-address translation!) • if name host changes IP address, may not be known Internet-wide until all TTLs expire • update/notify mechanisms proposed IETF standard • RFC 2136 DNS records **DNS:** distributed database storing resource records (RR) RR format: (name, value, type, ttl) - **type=A** - name is hostname - value is IP address - **type=NS** - name is domain (e.g., foo.com) - value is hostname of authoritative name server for this domain - **type=CNAME** - name is alias name for some “canonical” (the real) name - www.ibm.com is really servereast.backup2.ibm.com - value is canonical name - **type=MX** - value is name of mail server associated with name DNS protocol, messages - **query** and **reply** messages, both with same *message format* **message header** - **identification**: 16 bit # for query, reply to query uses same # - **flags**: - query or reply - recursion desired - recursion available - reply is authoritative <table> <thead> <tr> <th>identification</th> <th>flags</th> </tr> </thead> <tbody> <tr> <td># questions</td> <td># answer RRs</td> </tr> <tr> <td># authority RRs</td> <td># additional RRs</td> </tr> </tbody> </table> - questions (variable # of questions) - answers (variable # of RRs) - authority (variable # 
of RRs) - additional info (variable # of RRs)

DNS protocol, messages (message body)
- **questions** (variable # of questions): name, type fields for a query
- **answers** (variable # of RRs): RRs in response to query
- **authority** (variable # of RRs): records for authoritative servers
- **additional info** (variable # of RRs): additional “helpful” info that may be used

Inserting records into DNS
- example: new startup “Network Utopia”
- register name networkutopia.com at *DNS registrar* (e.g., Network Solutions)
  - provide names, IP addresses of authoritative name server (primary and secondary)
  - registrar inserts two RRs into .com TLD server: (networkutopia.com, dns1.networkutopia.com, NS) (dns1.networkutopia.com, 212.212.212.1, A)
- create authoritative server type A record for www.networkutopia.com; type NS record for networkutopia.com

Chapter 2: Outline ✓ Principles of network applications ✓ Socket Programming with UDP and TCP ✓ Web and HTTP ✓ Electronic Mail (SMTP, POP3, IMAP) ✓ DNS ❑ P2P Applications ❑ Video Streaming and Content Distribution Networks

Pure P2P architecture
- no always-on server
- arbitrary end systems directly communicate
- peers are intermittently connected and change IP addresses

examples:
- file distribution (BitTorrent)
- streaming (KanKan)
- VoIP (Skype)

File distribution: client-server vs P2P

**Question:** how much time to distribute file (size $F$) from one server to $N$ peers?
- peer upload/download capacity is a limited resource

File distribution time: client-server
- **Server transmission**: must sequentially send (upload) \( N \) file copies:
  - time to send one copy: \( F/u_s \)
  - time to send \( N \) copies: \( NF/u_s \)
- **Client**: each client must download file copy
  - \( d_{min} = \min \text{ client download rate} \)
  - min client download time: \( F/d_{min} \)

**Time to distribute** \( F \) to \( N \) clients using client-server approach \[ D_{c-s} \geq \max\{NF/u_s, F/d_{min}\} \] increases linearly in \( N \)

File distribution time: P2P
- **Server transmission**: must upload at least one copy
  - time to send one copy: \( F/u_s \)
- **Client**: each client must download file copy
  - min client download time: \( F/d_{\text{min}} \)
- **Clients**: as aggregate must download \( NF \) bits
  - max upload rate (limiting max download rate) is \( u_s + \sum u_i \)

\[ D_{P2P} \geq \max\{F/u_s, F/d_{\text{min}}, NF/(u_s + \sum u_i)\} \]

the third term increases linearly in \( N \) ... but so does the total upload capacity, as each peer brings service capacity

Client-server vs. P2P: example

client upload rate = \( u \), \( F/u = 1 \) hour, \( u_s = 10u \), \( d_{min} \geq u_s \)

P2P file distribution: BitTorrent
- file divided into 256Kb chunks
- peers in torrent send/receive file chunks

**tracker**: tracks peers participating in torrent
**torrent**: group of peers exchanging chunks of a file

Alice arrives ... ... obtains list of peers from tracker ...
and begins exchanging file chunks with peers in torrent P2P file distribution: BitTorrent • peer joining torrent: • has no chunks, but will accumulate them over time from other peers • registers with tracker to get list of peers, connects to subset of peers (“neighbors”) - while downloading, peer uploads chunks to other peers - peer may change peers with whom it exchanges chunks - **churn**: peers may come and go - once peer has entire file, it may (selfishly) leave or (altruistically) remain in torrent BitTorrent: requesting, sending file chunks **requesting chunks:** - at any given time, different peers have different subsets of file chunks - periodically, Alice asks each peer for list of chunks that they have - Alice requests missing chunks from peers, rarest first **sending chunks: tit-for-tat** - Alice sends chunks to those four peers currently sending her chunks *at highest rate* - other peers are choked by Alice (do not receive chunks from her) - re-evaluate top 4 every 10 secs - every 30 secs: randomly select another peer, starts sending chunks - “optimistically unchoke” this peer - newly chosen peer may join top 4 (1) Alice “optimistically unchokes” Bob (2) Alice becomes one of Bob’s top-four providers; Bob reciprocates (3) Bob becomes one of Alice’s top-four providers higher upload rate: find better trading partners, get file faster! 
Distributed Hash Table (DHT) • Hash table • DHT paradigm • Circular DHT and overlay networks • Peer churn Simple Database Simple database with \( (\text{key, value}) \) pairs: - key: human name; value: social security # <table> <thead> <tr> <th>Key</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>John Washington</td> <td>132-54-3570</td> </tr> <tr> <td>Diana Louise Jones</td> <td>761-55-3791</td> </tr> <tr> <td>Xiaoming Liu</td> <td>385-41-0902</td> </tr> <tr> <td>Rakesh Gopal</td> <td>441-89-1956</td> </tr> <tr> <td>Linda Cohen</td> <td>217-66-5609</td> </tr> <tr> <td>.......</td> <td>........</td> </tr> <tr> <td>Lisa Kobayashi</td> <td>177-23-0199</td> </tr> </tbody> </table> - key: movie title; value: IP address Hash Table - More convenient to store and search on numerical representation of key - $key = \text{hash}(\text{original key})$ <table> <thead> <tr> <th>Original Key</th> <th>Key</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>John Washington</td> <td>8962458</td> <td>132-54-3570</td> </tr> <tr> <td>Diana Louise Jones</td> <td>7800356</td> <td>761-55-3791</td> </tr> <tr> <td>Xiaoming Liu</td> <td>1567109</td> <td>385-41-0902</td> </tr> <tr> <td>Rakesh Gopal</td> <td>2360012</td> <td>441-89-1956</td> </tr> <tr> <td>Linda Cohen</td> <td>5430938</td> <td>217-66-5609</td> </tr> <tr> <td></td> <td></td> <td></td> </tr> <tr> <td></td> <td></td> <td></td> </tr> <tr> <td>Lisa Kobayashi</td> <td>9290124</td> <td>177-23-0199</td> </tr> </tbody> </table> Distributed Hash Table (DHT) • Distribute (key, value) pairs over millions of peers • pairs are evenly distributed over peers • Any peer can query database with a key • database returns value for the key • To resolve query, small number of messages exchanged among peers • Each peer only knows about a small number of other peers • Robust to peers coming and going (churn) Assign key-value pairs to peers • rule: assign key-value pair to the peer that has the closest ID. 
- convention: closest is the immediate successor of the key.
- e.g., ID space {0,1,2,3,...,63}
- suppose 8 peers: 1, 12, 13, 25, 32, 40, 48, 60
  - if key = 51, then assigned to peer 60
  - if key = 60, then assigned to peer 60
  - if key = 61, then assigned to peer 1

Silly strawman: circular DHT
- each peer *only* aware of immediate successor and predecessor. “overlay network”

Resolving a query

What is the value associated with key 53? $O(N)$ messages on average to resolve query, when there are $N$ peers

Circular DHT with shortcuts (Chord)
- each peer keeps track of IP addresses of predecessor, successor, shortcuts.
- reduced from 6 to 3 messages.
- possible to design shortcuts with $O(\log N)$ neighbors, $O(\log N)$ messages in query

Peer churn

handling peer churn:
- peers may come and go (churn)
- each peer knows address of its two successors
- each peer periodically pings its two successors to check aliveness
- if immediate successor leaves, choose next successor as new immediate successor

example: peer 5 abruptly leaves
- peer 4 detects peer 5’s departure; makes 8 its immediate successor
- 4 asks 8 who its immediate successor is; makes 8’s immediate successor its second successor.

P2P Motivation – Revisited
- Client-server dominates the mainstream. Why?
  - Performance
  - Economies of scale
  - Round trip times
  - (Mass file distribution is a rare exception)
- So, why peer-to-peer?
• Avoid single points of failure • Technological • ...and social ▸ Robustness, Survivability ▸ Power to the people Chapter 2: Outline ✓ Principles of network applications ✓ Socket Programming with UDP and TCP ✓ Web and HTTP ✓ Electronic Mail (SMTP, POP3, IMAP) ✓ DNS ✓ P2P Applications ☐ Video Streaming and Content Distribution Networks Video Streaming and CDNs: context - video traffic: major consumer of Internet bandwidth - Netflix, YouTube: 37%, 16% of downstream residential ISP traffic - ~1B YouTube users, ~75M Netflix users - challenge: scale - how to reach ~1B users? - single mega-video server won’t work (why?) - challenge: heterogeneity - different users have different capabilities (e.g., wired versus mobile; bandwidth rich versus bandwidth poor) - solution: distributed, application-level infrastructure Multimedia: video • video: sequence of images displayed at constant rate - e.g., 24 images/sec • digital image: array of pixels - each pixel represented by bits • coding: use redundancy \textit{within} and \textit{between} images to decrease \# bits used to encode image - spatial (within image) - temporal (from one image to next) \textit{spatial coding example:} instead of sending \(N\) values of same color (all purple), send only two values: color value \textit{(purple)} \textit{and number of repeated values (N)} \textit{temporal coding example:} instead of sending complete frame at \(i+1\), send only differences from frame \(i\) Multimedia: video - **CBR: (constant bit rate):** video encoding rate fixed - **VBR: (variable bit rate):** video encoding rate changes as amount of spatial, temporal coding changes **examples:** - MPEG 1 (CD-ROM) 1.5 Mbps - MPEG2 (DVD) 3-6 Mbps - MPEG4 (often used in Internet, < 1 Mbps) **spatial coding example:** instead of sending $N$ values of same color (all purple), send only two values: color value (purple) and number of repeated values ($N$) **temporal coding example:** instead of sending complete frame at $i+1$, send only 
differences from frame $i$

Video Compression - Color Encoding
[figures: RGB versus YUV color representations]
- YUV 4:2:0 sampling

Video Compression - Spatial Encoding
Frequency domain compression using DCT (Discrete Cosine Transform):
- Divide image into 8x8 blocks
- Take DCT of each block
- Quantize and compress each block
[figure: frame at 720x572 pixels; one 8x8-pixel block, its color value matrix and its DCT coefficients]

Video Compression - Temporal Encoding
Motion compensation
[figure: motion compensation between successive frames]

Streaming stored video
- simple scenario: video server (stored video) streaming to a client

Streaming multimedia: DASH
- **DASH**: Dynamic, Adaptive Streaming over HTTP
- **server:**
  - divides video file into multiple chunks
  - each chunk stored, encoded at different rates
  - **manifest file**: provides URLs for different chunks
- **client:**
  - periodically measures server-to-client bandwidth
  - consulting manifest, requests one chunk at a time
    - chooses maximum coding rate sustainable given current bandwidth
    - can choose different coding rates at different points in time (depending on available bandwidth at the time)

Streaming multimedia: DASH
- “intelligence” at client: client determines
  - *when* to request chunk (so that buffer starvation, or overflow, does not occur)
  - *what encoding rate* to request (higher quality when more bandwidth available)
  - *where* to request chunk (can request from URL server that is “close” to client or has high available bandwidth)

Content distribution networks
• 
**challenge:** how to stream content (selected from millions of videos) to hundreds of thousands of simultaneous users?
• **option 1:** single, large “mega-server”
  • single point of failure
  • point of network congestion
  • long path to distant clients
  • multiple copies of video sent over outgoing link
....quite simply: this solution *doesn’t scale*

Content distribution networks
• **challenge**: how to stream content (selected from millions of videos) to hundreds of thousands of simultaneous users?
• **option 2**: store/serve multiple copies of videos at multiple geographically distributed sites *(CDN)*
  • **enter deep**: push CDN servers deep into many access networks
    • close to users
    • used by Akamai, 1700 locations
  • **bring home**: smaller number (tens) of larger clusters near (but not within) access networks
    • used by Limelight

Content Distribution Networks (CDNs)
- CDN: stores copies of content at CDN nodes
  - e.g. Netflix stores copies of “House of Cards”
- subscriber requests content from CDN
  - directed to nearby copy, retrieves content
  - may choose different copy if network path congested

Chapter 2: summary
our study of network apps now complete!
- application architectures
  - client-server
  - P2P
- application service requirements:
  - reliability, bandwidth, delay
- Internet transport service model
  - connection-oriented, reliable: TCP
  - unreliable, datagrams: UDP
- specific protocols:
  - HTTP
  - SMTP, POP, IMAP
  - DNS
  - P2P: BitTorrent
- video streaming, CDNs
- socket programming: TCP, UDP sockets

Chapter 2: summary
*most importantly: learned about protocols!*
- typical request/reply message exchange:
  - client requests info or service
  - server responds with data, status code
- message formats:
  - *headers*: fields giving info about data
  - *data*: info (payload) being communicated

**important themes:**
- control vs. data messages
  - in-band, out-of-band
- centralized vs. decentralized
- stateless vs. stateful
- reliable vs. 
unreliable message transfer - “complexity at network edge”
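To tie the summary back to the DHT material earlier in the chapter, the successor rule used to assign keys to peers can be sketched in a few lines of Python. This is our own sketch, not code from the slides; the function name `successor` is ours. The peer IDs and key lookups are the ones from the chapter's example.

```python
def successor(key, peers):
    """Return the peer responsible for `key`: the closest peer that is
    the immediate successor of the key on the identifier circle,
    wrapping around past the largest ID."""
    ring = sorted(peers)
    for p in ring:
        if p >= key:
            return p
    return ring[0]  # wrapped past the largest ID: first peer on the circle

peers = [1, 12, 13, 25, 32, 40, 48, 60]  # 8 peers, ID space {0, ..., 63}
print(successor(51, peers))  # 60
print(successor(60, peers))  # 60
print(successor(61, peers))  # 1  (wrap-around)
```

A strawman circular DHT locates this successor by forwarding a query peer by peer ($O(N)$ messages); Chord-style shortcut ("finger") tables find it in $O(\log N)$ messages instead.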
Metaphors of code

Published in: Thinking Skills and Creativity
DOI: 10.1016/j.tsc.2016.09.004
Published: 19/09/2016

Metaphors of code—Structuring and broadening the discussion on teaching children to code

Tomi Dufva\textsuperscript{a,*}, Mikko Dufva\textsuperscript{b}

\textsuperscript{a} Aalto-University, School of Arts, Design and Architecture, Finland
\textsuperscript{b} VTT Technical Research Centre of Finland Ltd, Tekniikankatu 1, Tampere, P.O. Box 1300, 33101 Tampere, Finland

\begin{abstract}
Digital technology has become embedded into our daily lives. Code is at the heart of this technology. The way code is perceived influences the way our everyday interaction with digital technologies is perceived: is it an objective exchange of ones and zeros, or a value-laden power struggle between white male programmers and those who think they are users, when they are, in fact, the product being sold? Understanding the nature of code thus enables the imagination and exploration of the present state and alternative future developments of digital technologies. A wider imagination is especially important for developing basic education so that it provides the capabilities for coping with these developments. Currently, the discussion has been mainly on the technical details of code. We study how to broaden this narrow view in order to support the design of more comprehensive and future-proof education around code and coding. 
We approach the concept of code through nine different metaphors from the existing literature on systems thinking and organisational studies. The metaphors we use are machine, organism, brain, flux and transformation, culture, political system, psychic prison, instrument of domination and carnival. We describe their epistemological backgrounds and give examples of how code is perceived through each of them. We then use the metaphors in order to suggest different complementary ways that ICT could be taught in schools. The metaphors illustrate different contexts and help to interpret the discussions related to developments in digital technologies, such as the free software movement, the democratization of information and the internet of things. They also help to identify the dominant views and the tensions between the views. We propose that the systematic use of the metaphors described in this paper would be a useful tool for broadening and structuring the dialogue about teaching children to code.
\end{abstract}

© 2016 The Authors. Published by Elsevier Ltd. This is an open access article under the CC BY-NC-ND license (http://creativecommons.org/licenses/by-nc-nd/4.0/).

1. Introduction

Digitality as a phenomenon defines our era. Digital technologies have secured their place in business and in social relations as well as in culture. Digital technologies affect society, but often these changes are taken as given, without broader discussion on the impacts and consequences (König et al., 1985). This is troubling, because digital technology functions in various positions in our society. For example, a high percentage of stock trading is done through trading algorithms with little human involvement (Washington, 2015; Steiner, 2013). Modern cars carry so much digital technology that they have been called "computers on wheels" (Foley Lardner LLP, 2014; Hirsch, 2015). 
\textsuperscript{*} Corresponding author. E-mail addresses: tomi.dufva@aalto.fi (T. Dufva), mikko.dufva@vtt.fi (M. Dufva).

Social media, essentially a digital phenomenon, has defined new ways of interaction and has influenced culture. There is also evidence that digital technologies shape the way people think, by supporting, sharing and expanding people’s cognitive processes (Barzilai and Zohar, 2006). By digital technologies, we mean technologies that are based on digital signal processing, which can be reduced to a flow of ones and zeroes, and which usually utilize information networks to function. Digital technologies allowed for the rampant innovation and growth that started around the 1940s, a period defined as the digital age (Ceruzzi, 2012). Digital technologies include all the technologies from smartphones and computers to automated manufacturing and decentralized communication protocols. Digitalization presents new challenges that, in essence, call for an understanding of digital technologies. The so-called digital divide, which formerly implied the distinction between those who have access to the internet and those who do not (Mehra, Merkel, & Bishop, 2004), can now be seen as the divide between those who understand digital technologies and those who do not (for a historical view on ICT in education, see Wilson, Scalise, & Gochyyev, 2015). Mark Warschauer points out that, in today’s society, the ability to access, adapt and create knowledge using information and communication technologies is critical to social inclusion (Warschauer, 2004). The access to digital resources, as well as the ease of use of those resources, has increased, but the understanding of the code has not kept the same pace. 
This can be seen, for example, within the digital natives discussion. Knowing how to use a tablet computer at the age of two does not mean that one understands the way the machine works or the code behind it. It does not even imply that one could learn to cope with the technology (Kupiainen, 2013). This can also be seen from Carita Kiili’s dissertation (Kiili, 2012), in which she states that many young adults have problems assessing and evaluating search results on the net. In essence, digital technologies are a source of inequality, which is problematic given their ubiquity in modern society. Code is the heart of every digital technology and substantially shapes its behaviour. In this paper, we define code as a digital language with a set of assumptions about the users and the world. Code is used to create programs that control digital technologies, from automated factories to personal computers, and from connected home appliances to services providing social networking. Thus, code, in our working definition, refers to the principles and choices made, and is not restricted to any specific programming language. Coding is the act of writing code and building programs, which includes making implicit and explicit choices about the purpose, framing and scope of the program. The key motivation for this paper is that, because digital technologies are always programmed and are thus based on code, understanding code and the assumptions inherent in it is necessary for full participation in modern society. The code in digital technologies is not value-free; rather, it widely reflects both conscious and subliminal values of the programmer, a software company or society’s understanding of good code. Digital technology’s operating models are not immutable laws of nature, but rather flexible models that are designed and controlled by humans (Lessig, 1999, 2009). Code does not reflect objective truth about the world. Instead, it constructs laws in the digital realm. 
Without understanding how these laws are formed, we are not able to fully participate in the discourse of our digital life (Giroux, 2011; Lessig, 2009; Rushkoff, 2010). Technology does not impinge upon us from outside of society, but interweaves into our society in the same way as the political or economic system does, and is also dependent on these other systems, which can alter the way, or speed, of technological progress (König et al., 1985). Without including technology as a coherent part of societal discussion, the effects of technology and its relations to other systems stay ambiguous. Furthermore, discussion around the ramifications of technologies is crucial, as technology has the tendency to convert social, scientific, governmental and human problems into technical problems (Williamson, 2015). We propose code literacy as a way to participate in the discussion around the effects of digital technologies on society. Code literacy does not directly allude to learning to program in the traditional sense; rather, it implies the understanding of the code and its intentions and context. The notion of literacy illustrates the case: In the same way that not all literate individuals become authors, not all code-literate individuals become developers. Still, literate people have the necessary skills and the apprehension of reading and writing. Understanding code does not emerge naturally from lived experience, but has to be taught. The code used to form the present digital world, be it an operating system, software or stock-trading algorithm, is distinctly different from the everyday analogue tools, such as hammer, pen or paintbrush, used to form the material world. One example of this is the binary system of two alternate states, often represented as 1 and 0. Code is binary and, therefore, can be reduced to “yes or no” decisions. 
However, as Rushkoff argues, human lives are not binary and thus trying to represent them using these binary systems is problematic (Rushkoff, 2010). Learning to code and digital learning systems are deeply intertwined in political, societal and commercial structures (Williamson, 2015, 2016). We argue that current teaching about digital technologies, programming and code, and the discussion around it, does not fully take into account the societal and ethical dimensions of code. Thus, our goal in this paper is to broaden the discussion and propose a structure for understanding different views on code. To facilitate this, we describe nine metaphors of code based on four paradigms. Through the use of metaphors and their associated paradigms we wish to support a larger and more holistic view on code and digital technologies. This paper is structured as follows. After this introduction, in Section 2 we describe nine general metaphors that cover four common paradigms of social theory as well as different assumptions about the complexity of the world and the relations between stakeholders. In Section 3, we apply these metaphors to structuring the discussion around code and illustrating various viewpoints expressed about what code is and how it influences society. In Section 4, we focus specifically on education around code and coding, and suggest different views on teaching code. Section 5 concludes the paper.

<table>
<thead>
<tr>
<th rowspan="2">Assumptions about the nature of the world</th>
<th colspan="3">Assumptions about the values and interests of stakeholders</th>
</tr>
<tr>
<th>Unitary</th>
<th>Pluralist</th>
<th>Coercive</th>
</tr>
</thead>
<tbody>
<tr>
<td>Simple</td>
<td>Machine, Organism</td>
<td>Culture, Political system</td>
<td>Psychic prison</td>
</tr>
<tr>
<td>Complex</td>
<td>Brain, Flux and transformation</td>
<td></td>
<td>Instrument of domination, Carnival</td>
</tr>
</tbody>
</table>

Fig. 1. 
Nine metaphors categorised by their assumption of the complexity of the context or “system”, and the values and interests of stakeholders (Jackson, 2003).

2. Metaphors for structuring the discussion around code

The language around concepts such as technology has been analysed before through methods such as discourse analysis and critical discourse analysis (Fairclough, 1995; Weiss and Wodak, 2006). Our analysis is based on this stream of qualitative analysis of the concepts used to describe a phenomenon. However, in this paper we use metaphors as the tool for analysing and structuring the discussion. Metaphors are a mechanism for describing, understanding and comparing abstract concepts, and can be defined as mappings across conceptual domains (Lakoff, 2009). Through a metaphor, the entities in one domain are mapped onto entities in another domain. For example, a segment of code could be mapped to represent an organ in the human body. Metaphors can be powerful in influencing how an issue is approached or a problem is framed, but we are mostly unaware of their effect (Thibodeau and Boroditsky, 2011). Metaphors have been used in a systematic fashion in management and organisational studies (Jackson, 2007; Morgan, 2006). We use the metaphors introduced by Morgan (Morgan, 2006) and developed further by Jackson (Jackson and Keys, 1984). These nine metaphors describe different views on the concept of code and include the metaphors of machine, organism, brain, flux and transformation, culture, political system, psychic prison, instrument of domination and carnival. The nine metaphors are based on four common research approaches or paradigms in social theory: the functionalist, interpretive, emancipatory and postmodern (Jackson and Keys, 1984; Jackson, 2007), based on (Louis, Burrell, & Morgan, 1983) and (Alvesson and Deetz, 1996). 
Paradigm, in its original sense, means the set of ideas, assumptions and beliefs that shape and guide the scientific activity of a research community (Kuhn, 1970). The aim in the functionalist paradigm is to demonstrate law-like relations between objects. The emphasis is on function and efficiency. The functionalist paradigm is based on the assumption that an understanding can be gained through scientific method and empirical research. The interpretive paradigm, as the name suggests, is more interested in the interpretations people make of different issues and situations. These interpretations guide people’s behaviour. Thus, the aim is to understand these interpretations and the underlying culture through methods such as hermeneutics and ethnography. The emancipatory paradigm focuses on the power relations in society. It is aimed at “emancipating”, i.e. liberating and empowering people and unmasking domination through ideological and cultural critique. The postmodern paradigm is opposed to all three former paradigms, which it views as modernist. It critiques the attempt to form grand narratives and to assume rationality and direction. Its methods include deconstruction and genealogy. The metaphors can be structured along two dimensions (Jackson, 2003). The first considers the assumptions made about the world. The world can be seen as relatively simple, meaning that the key issues are knowable, causal relations between the issues are straightforward and known, and goals are achievable by following a detailed plan. On the other hand, the world can be seen to be a complex, interconnected “mess”, where there are many surprises, unintended consequences and non-linear causal relations, and thus the focus is more on adapting and “muddling through” than on following a plan. The second dimension covers three different perceptions of the values and interests of the stakeholders: unitary, pluralist and coercive. 
Stakeholder values and opinions can be assumed to be unitary, meaning that the stakeholders tend to agree on a common goal and share a similar worldview. A pluralist view criticises this as too simplistic, and assumes that there are multiple, competing goals and worldviews. A coercive view goes further and frames the stakeholder relations as a power struggle between those in power and those who are oppressed. Thus, there are multiple goals and worldviews, but not all are given voice. The metaphors can be positioned in a matrix using these two dimensions (Fig. 1; see also the system of system methodologies by Jackson & Keys (1984) and Jackson (2003)). While Jackson (2003) uses metaphors to describe organisations, we argue that they can also be used to shed light on more general issues. We will next briefly describe the metaphors and then, in Section 3, use them to illustrate various views of code. The first four metaphors are based on the functionalist paradigm and view the values and interests of stakeholders, i.e. people who are influenced by code, as unitary and thus not problematic. The machine metaphor depicts issues as linear, mechanistic sequences from inputs to outputs and emphasises efficiency above all. The organism metaphor describes the non-linear interaction between different parts and highlights the functional differences and roles of the parts.

Table 1
Nine metaphors for understanding the nature and purpose of code.

<table>
<thead>
<tr>
<th>Metaphor</th>
<th>Description of code</th>
<th>Purpose of code</th>
<th>Example</th>
</tr>
</thead>
<tbody>
<tr>
<td>Machine</td>
<td>Code is a linear sequence of commands that is input to a machine</td>
<td>To control a machine</td>
<td>Algorithms, code listings</td>
</tr>
<tr>
<td>Organism</td>
<td>Code is a set of objects that represent different parts of a program</td>
<td>To create functionality, to interact</td>
<td>Object-oriented programming</td>
</tr>
<tr>
<td>Brain</td>
<td>Code is the intelligence of man-made systems</td>
<td>To create new information, to learn</td>
<td>Cloud computing, artificial intelligence</td>
</tr>
<tr>
<td>Flux and transformation</td>
<td>Code is the process that creates changes in man-made systems</td>
<td>To create change, to create structure</td>
<td>Software as life changer</td>
</tr>
<tr>
<td>Culture</td>
<td>Code is a way of thinking and understanding the world</td>
<td>To connect and create a community</td>
<td>Free software foundation, Hacker ethics, Hacker culture</td>
</tr>
<tr>
<td>Political system</td>
<td>Code is a statement and a tool to shape the world</td>
<td>To establish a new form of society</td>
<td>Code as political construct, Internet</td>
</tr>
<tr>
<td>Psychic prison</td>
<td>Code is a system which requires people to adapt to it</td>
<td>To shape people</td>
<td>Filter bubble</td>
</tr>
<tr>
<td>Instrument of domination</td>
<td>Code is a tool for domination</td>
<td>To control people</td>
<td>Data as a source of power</td>
</tr>
<tr>
<td>Carnival</td>
<td>Code is a tool for art and creativity</td>
<td>To challenge existing mindsets, to open up discussion</td>
<td>Creative coding</td>
</tr>
</tbody>
</table>
The brain metaphor, stemming from cybernetics, puts emphasis on learning and adaptation in a hierarchical system, while the flux and transformation metaphor focuses on the processes and logics of change. The culture and political system metaphors are based on the interpretive paradigm, which puts emphasis on the different interpretations that exist of an issue. The culture metaphor focuses on values, beliefs and worldviews, and thus highlights the community or communities around the issue. The political system metaphor also emphasises values and worldviews, but focuses more on the governance and decision-making around the issue. It thus highlights relevant institutions and political structures. The psychic prison and instrument of domination metaphors are based on the emancipatory paradigm. Similar to the interpretive paradigm, the assumption is that there are multiple differing worldviews, beliefs and values. However, now the focus is on the power relations between the worldviews and on bringing ignored or suppressed aspects and questions to the surface. The psychic prison metaphor focuses on the structures, both intentional and unintentional, that suppress individual freedom and learning. The instrument of domination metaphor focuses more on the group level and highlights how the issue is used as a way to control others. The final metaphor, carnival, is based on the postmodern paradigm, which seeks to question the way issues are discussed and framed in general by deconstructing the main concepts. The carnival metaphor thus highlights the creative and chaotic side of an issue, in order to use the issue itself to question the way it is discussed. This may often result in a multi-faceted picture of the issue, which is not as coherent as in the other metaphors. Our purpose in describing and applying these metaphors is not to argue that one is better than another, or that a certain view of an issue should be followed.
Rather, our purpose is to use the metaphors to structure the discussion around code. The nine different views help to understand the discussions and decisions around code. In addition to giving a more comprehensive view of what code is, the different metaphors also highlight what is missing from the discussions and which views conflict with each other. We will return to these questions in the discussion section, after we have applied the nine metaphors in the next section.

3. Understanding code through metaphors

In this section, we propose ways to define code through the different metaphors. We illustrate how code is defined and how it appears in the different metaphors. In Table 1, we provide a summary of these descriptions of code and views of the purpose of code, as well as some examples. These results are elaborated below.

3.1. Functionalist paradigm

The functionalist paradigm introduces a mechanical and unitary view of code. It focuses on the straightforward advancement of code as a technical invention. Inside the paradigm, four different metaphors present different nuances. As a whole, the functionalist paradigm can be regarded as the dominant view: it predominantly acts as a common and shared understanding of the meaning of code.

3.1.1. **Machine: code as a mechanistic, linear sequence of commands**

The machine metaphor represents the fundamental mechanical comprehension of code. Code is seen as a sequential set of instructions that are input into and processed by a machine: the computer. The results are then displayed to the user. In other words, the user expects that the computer as a machine will deliver her or him results based on a set of instructions – the code. From a technical perspective, the machine metaphor demonstrates the fundamental physics of code. Paul E. Ceruzzi calls this the digital paradigm – the idea that all code, computation and control are done in binary form.
With binary form, he refers not only to binary arithmetic – the number system that uses just two symbols, 1 and 0 – but also to the use of binary logic to control, encode and transmit information (Ceruzzi, 2012). In essence, all digital information is based on binary code. In the machine metaphor, computers, the machines that are able to process digital information, are basically input and output machines. They take instructions, process those instructions and output information based on the instructions. Code represents the set of instructions in the languages that the computers can understand. Computer languages vary from lower level languages to higher level languages. Lower level languages are closer to the binary logic that computers use on the implementation level, while more complex, higher level languages are easier for humans to write and read. Whatever the language is, in the end all of these languages are compiled back to a binary form. From the machine-metaphor view, the higher level languages can be seen as a rational progression towards getting the intended process completed faster and easier. Even though code in higher level languages is farther from the binary code, being closer to the language humans use increases efficiency through a manageable working environment and less friction in the process. Many modern compilers are generally more efficient in compacting the code to binary than humans are, resulting in more robust code (Ceruzzi, 2012). The machine metaphor illustrates the straightforward process of digital technology – progress means creating ever more efficient machines to interpret increasingly complex code. The machine metaphor represents a reductionist viewpoint and a hierarchical way of processing data. Tasks are broken into parts and processed in a strict order governed by the rules of the program – the code.
This assumes that the context is simple and can be reduced to separate parts, and that a single common goal exists. Seeing code only through this metaphor results in an emphasis on the process without questioning the direction, which, furthermore, often results in advocacy of a single way of coding without embracing a possible diversity of goals and processes. In the context of planning education, this could mean a debate on which coding language should be taught, but not questioning what the purpose of teaching the coding language is in the first place. The underlying rationale behind such a debate is that coding is a skill for the job market and teaching coding – the right language and style – is thus good for ensuring the employability of the future workforce.

3.1.2. **Organism: code as a combination of objects**

The organism metaphor sees code as a construct of many individual parts that work together. This can be seen as a continuation of the machine metaphor, as it focuses on increasing the efficiency of code by breaking the code into more manageable parts, thus allowing programmers easier ways to reach their goals (Petzold, 1999). The organism metaphor represents another common mechanical view of code. It can also give us an idea of how modern code is created and how software problems are addressed – code is not seen as a simple set of instructions but as structured sets of code, organs, that together create a working program, or a body. On a technical level, the organism metaphor corresponds to object-oriented programming (Cox, 1985). Object-oriented programming breaks the linear set of instructions into different objects that can be addressed when necessary. Most modern programming languages favour this approach as it allows for a more structured management of complex code that makes problem solving easier, thus increasing efficiency (Petzold, 1999).
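The contrast between the machine metaphor's linear sequence of commands and the organism metaphor's decomposition into objects can be sketched in a few lines of Python. This is a minimal, hypothetical illustration of our own; the names `greet_linear` and `Greeter` do not come from the literature discussed here.

```python
# Machine metaphor: a linear sequence of commands, processed in strict order.
def greet_linear(name):
    message = "Hello, " + name   # step 1: build the message
    return message               # step 2: hand the result back to the caller

# Organism metaphor: the same behaviour decomposed into an object, an "organ"
# that bundles its own state and behaviour and can be addressed when necessary.
class Greeter:
    def __init__(self, salutation):
        self.salutation = salutation

    def greet(self, name):
        return self.salutation + ", " + name

print(greet_linear("world"))            # -> Hello, world
print(Greeter("Hello").greet("world"))  # -> Hello, world
```

Both produce the same output; the difference is purely structural: the object carries its own state (`salutation`), so it can be reused and combined with other objects rather than executed as one fixed sequence.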
Furthermore, the organism metaphor represents a structural approach, which allows the creation of more flexible code that can respond simultaneously to multiple inputs and outputs. Coding is still seen as a mechanical practice of giving instructions, but the linearity of the instructions is broken into interconnected parts. Object-oriented thinking and problem solving are at the heart of modern coding. Many commonly used higher level programming languages incorporate object-oriented thinking. As such, object-oriented thinking and problem solving break the traditional narrative and sequential ways of thinking and understanding (Manovich, 1999).

3.1.3. **Brain: code is intelligence**

In the brain metaphor, code is not only a set of organised instructions, but represents the intelligence of computers. Code is seen as the man-made brain: intelligence that not only structures information, but also creates and modifies it. Code is the central unit that processes and develops information in the system, be it software, a computer or any other machine. One example of seeing code through the brain metaphor is the notion of artificial intelligence. Artificial intelligence (AI) is the study of how to build or program computers to enable them to do what minds can do (Boden, 1996). The idea of artificial intelligence captivated many past and present thinkers long before digital technologies existed (McCorduck, 2004). Modern programmable computers can be seen as the manifestation of the idea of artificial intelligence – before computers, machines were built for a specific task and purpose (Ceruzzi, 2012). The idea of a general device, the purpose of which could be changed indefinitely by programming, was revolutionary. A similar idea of programming and reprogramming fuels the current developments in artificial intelligence – pattern recognition, computational learning theory and machine learning.
These fields stem from the idea that the code inside the computer can change or, borrowing a biological term, evolve (Chrisley and Begeer, 2000). The ultimate extreme in artificial intelligence is the technological singularity, in which artificial intelligence has progressed beyond human intelligence and becomes sentient through code (Kurzweil, 2005; Lanier, 2010). Through the brain metaphor, this development is seen as natural and desirable; the metaphor contains no problematisation or critique. Code only actualises the potential and predetermined ultimate goal of digitality. In the technological singularity, code truly becomes the brains of the computer. The brain metaphor is naturally not limited to the discussion of artificial intelligence. We can also look at other systems of code through the brain metaphor. It extends the functionalist paradigm further, from lists and objects to a system with a central controller that has the authority to control and modify the code. A good example is cloud computing, where the machines running the code become secondary. Even though the code is running on physical computers, the physical location is irrelevant. Code is seen to escape the hardware and have a life of its own in the cloud of digital computing power. In a similar way, modern digital voice-controlled assistants aim to create the illusion of an omniscient virtual entity and can thus be seen to represent code in its abstract form. They seem to exist beyond the machinery running them.

3.1.4. **Flux and transformation: code will save the world**

The flux and transformation metaphor is similar to the brain metaphor, as it also concentrates on the development of code, but, rather than framing code as the intelligence of machines, it sees code as a transformative tool to continually change the world. It therefore broadens the focus from computers and code to their environment.
It can bring into focus the aspiration many software companies share, at least in their public declarations, which is not just to create better products, but to make the world a better place. From Google's "Don't be evil" slogan to Facebook's CEO Mark Zuckerberg, who argues that his company's mission is to "make the world more open and connected" (Mark Zuckerberg, Sarah Lacy Interview Video, 2008), software companies are focusing on solving problems rather than creating products. As Jeff Jarvis has said, "Complexity is a solvable problem in the right hands" (Jarvis, 2012). Code is seen as a medium that is flexible and can be deployed rapidly and widely. It only takes one person and a few nights to come up with a solution that has the possibility to change or disrupt the way we see the world. The flux and transformation metaphor thus moves the focus from the advancement of efficient code to code's ability to advance our lives. The metaphor is firmly grounded in the functionalist paradigm, and focuses on how to create change rather than on the questions of why change is needed, what the direction should be and who gets to decide the direction. Thus, it does not problematise the act of making the world a better place. The problems are seen as simple, straightforward tasks that can be solved with code.

3.2. Interpretive paradigm

Whereas the functionalist paradigm and its four metaphors saw code as a fairly straightforward issue that mainly concerns technical aspects and implementations, the interpretive paradigm has a greater interest in the different ways of seeing and understanding code. In contrast to the unitary perspective of functionalism, the interpretive paradigm takes into account the plurality of stakeholder values and opinions in the context in which the code is created and deployed.

3.2.1.
**Culture: code creating communities**

The culture metaphor focuses on the communal aspects of code, for example on what kinds of communities and subcultures are formed around code and coding, and what kinds of values are projected onto code. The popularisation of digital technology has led to a whole industry that has created its own ways of working and communicating, as well as its own ethical rules, which are reflected in the way code is perceived and treated. The culture is not unambiguous; rather, it consists of many sub-cultures and ideologies. The culture metaphor brings into focus the ways code affects how the surrounding environment – the world – is interpreted. One example of this is the free software movement. The movement has a long creation history dating back to the early phases of computers. Before personal computers, computers were mainly used in corporations, universities and research laboratories. Most of the operating systems were open: anyone could read and modify the way the operating systems worked. When the industry began to grow, especially into businesses and households, and the operating systems evolved, many manufacturers started closing their code, thus preventing collaboration and modification. For some, this development went against their basic rights and values as programmers. On this basis, Richard Stallman, then working for the Artificial Intelligence Lab at MIT (Stallman, Gay, & Lessig, 2009), created the GNU project (FSF, 2015a), on which Linus Torvalds later built his free operating system, Linux. A few years after starting the GNU project, Stallman founded the Free Software Foundation (FSF) (FSF, 2015b). These projects can be seen as a wish to keep the academic ethos, collaboration and hacker culture alive in the developer culture (Stallman et al., 2009). The stated goal of these projects is societal change. FSF wants to change the way we use, distribute and think about code.
At the core of FSF are four rights that, according to FSF, are essential in keeping the development and use of code democratic:

- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works, and change it, so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbor (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this, you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this (FSF, 2015a).

These rights align with hacker culture; at the time Stallman founded the foundation, the word "hacker" had different connotations than it has now: hacker was a positive concept rather than one depicting a coder with a criminal aptitude. Hacker culture believes in free access, freedom of information, and improvement to the quality of life (by using digital technologies) (Levy, 2010). Even though the aims of FSF are political and ideological, the movement also reveals the richness of the developer culture, with its core beliefs, traditions and ethics. As Coleman (2012) says in her book Coding Freedom: The Ethics and Aesthetics of Hacking (Gillen, 2013), there is a common pride and joy in offering your "handmade" code to others, as well as a genuine interest in learning from other developers' code. As the examples above illustrate, inspecting coding from the perspective of the culture metaphor reveals the rich and many-sided culture of code and shows that coders sit simultaneously at the centre and at the margins of the liberal tradition (Gillen, 2013). Code both creates many sub-cultures and at the same time affects the general culture. Thus, code and coding are not only about giving instructions to a machine or solving problems, but also about influencing the culture.

3.2.2.
**Political system: code structuring the society**

The other metaphor in the interpretive paradigm, political system, offers a somewhat different view from the culture metaphor. Whereas the culture metaphor sees the world from the individual and grassroots perspective, the political system metaphor looks at how code creates hierarchical systems that affect our everyday lives. Besides influencing its culture, code also affects society in a more systematic manner. The way our coded environments are built, as well as the way code itself is built, constructs the ways we act in the world. From operating systems and programs to the protocols that hold the constructions together, the many ways we interact in the world are channelled through code. "Code is law", as Harvard lawyer and author of Code and Other Laws of Cyberspace (Lessig, 1999, 2009) Lawrence Lessig puts it. In the political system metaphor, code is seen not as mere mechanical technology, but as a malleable force that can be changed by the culture that developers live in, as well as through governmental or any other institutional control. One example of this is the internet, as it offers us a multi-faceted view of how political systems affect the way code is structured. Born out of research projects in the US defence department, the internet spread to universities and from there to the public. In the beginning, the internet was seen as a revolutionary medium that allowed every participant not only to receive, but also to send information (Lessig, 2009), thus enabling a 'real' democratic process. The internet was seen as free by its nature, offering equal opportunities to everyone (Fleischer et al., 2014; Lessig, 2009). A quote from MIT professor Dave Clark's 1992 speech at the IETF (Internet Engineering Task Force) conference depicts the ethos well: "We reject: kings, presidents, and voting.
We believe in: rough consensus and running code." (Borsook, 1995) But as Lessig wrote already in 1999, the internet has no nature per se, but is dependent on our choices: 'We can build, or architect, or code cyberspace to protect values that we believe are fundamental, or we can build, or architect or code cyberspace to allow those to disappear. . . . There is no choice that does not include some kind of building. Code is never found; it is only ever made, and only ever made by us' (Lessig, 1999). Sixteen years later, the structure of the internet has been changed considerably through the actions of several different actors. When Lessig was writing the first revision of Code and Other Laws of Cyberspace, the topical issue was free mp3 downloads and the music industry's reaction to them, which led to digital rights management (DRM) and legislation. At about the same time, China was waking up to the threats that the internet, as a source of uncontrolled information, might pose to its governance, causing it to erect the "Great Firewall of China", a project that aims to manage all the net communication in and out of China (University of California-Davis, 2007). And a few years ago, Edward Snowden revealed the widespread internet surveillance that governments were engaged in, thus displaying yet another layer of the internet and what has been made possible through code. As Mikael Brunila proposes, the internet has enabled panspectric control, which alludes to the way information can be gathered from the internet. In traditional panoptic control, information is gathered from suspects after they actually become suspects; in panspectric control, everything is collected, all the time, and from everyone (Fleischer et al., 2014). These kinds of structural changes in societal architecture give us a glimpse of the reach code has. The internet is a multi-layered construction of code, which is inherently intertwined with political systems.
Code is not free from these ties, but rather has a decisive role in creating the architectures we use every day. The questions of how to control code, who can control code and why we would control it are increasingly relevant in our lives, as code permeates more and more of our everyday activities via internet-based services, but also through increasingly "smart" gadgets.

3.3. Emancipatory paradigm

Many of the issues that arise in the interpretive paradigm can also be seen as issues in the emancipatory paradigm, and vice versa. The difference comes from the focus on power relations. In the interpretive paradigm, there are differing views on the purpose and goals related to code, but the differences between these views are assumed to be somewhat unproblematic. We can examine the different views that code offers to culture and politics. In contrast, the metaphors in the emancipatory paradigm focus more on what the power relationship is between these various views, and how these power relations are reflected or enacted through code. For example, does code enable or restrict emancipation at both the individual and societal level?

3.3.1. **Psychic prison: code restricting human behaviour**

The psychic prison metaphor looks at power relations from the individual perspective. It brings into focus the code that underlies technological inventions from the emancipatory perspective. Is the code good for the individual? Does the code help an individual accomplish the things she wants to do? How does the architecture of code influence the life of an individual? One example of this is what Eli Pariser calls the filter bubble (Pariser, 2012), meaning the possible outcome that may result from using invisible automatic personalisation algorithms. The algorithms are invisible in the sense that an individual does not choose to use them, nor sees them. Rather, she has opted into them automatically when using certain services.
One example Pariser gives is the difference in the results people get by doing the same Google search. Using the same search words yields different results, based on dozens of different signals Google collects from the user (Pariser, 2012). A quote from Mark Zuckerberg, CEO of Facebook, illustrates the idea further: "A squirrel dying in your front yard may be more relevant to your interests right now than people dying in Africa." As Pariser says, "Your filter bubble is your own personal, unique universe of information that you live in online. And what's in your filter bubble depends on who you are, and it depends on what you do. But the thing is that you don't decide what gets in. And more importantly, you don't actually see what gets edited out." The idea of the filter bubble shows the potential problems caused by code that selects content from the internet unbeknownst to the user. Having no control over this code creates an unequal situation between the user and the code. On what basis does the code select what content is shown and what is hidden? And what are the motivations of the developer who decided the rules embedded in the code? Are the rules decided with the user's assumed benefit in mind, or are they defined to benefit the business that the developer is in? On a more abstract level, the psychic prison metaphor also focuses on the issue of how we might knowingly or unconsciously change ourselves because of code. For example, MIT professor Sherry Turkle talks about the ways we require digital devices to actualise our feelings. She gives an example from her study in which she concluded that some teenagers require the exchange of text messages to truly justify and experience their feelings, like falling in love or being scared (Turkle, 2011).
Another point that Turkle, along with many others such as Jaron Lanier (Lanier, 2010) and Douglas Rushkoff (Rushkoff, 2010, 2013), brings up is the alienation that code allows us to feel. Turkle speaks about the feeling of being "alone together", where we are physically in one place with other people, but mentally somewhere else (Turkle, 2011). Another example of this abstract level is obsessive gaming. How does the code in games take into account the player and their needs? Is the code made in a responsible way, or does it use tricks to hook the player into spending more time or money on the game? The psychic prison metaphor highlights how the power relationship between the individual and code is problematic. The ways code changes us may not always be for the good. As Jaron Lanier asks, do coded environments change people, or do people change themselves because of them? Lanier's point is that, in order to use, enjoy or respect code, humans can adjust to many levels of intelligence. Sometimes, code requires us to be less intelligent than we really are (Lanier, 2010). Self-control is required in order to break free from the psychic prison. Both Lanier and Turkle use the term dieting. In a similar vein, Pariser is concerned that the filter bubble might feed us too much of the information we enjoy and too little of the information we need, and uses the term "information junk food" (Pariser, 2012). Turkle asks for a digital diet: a reflective and introspective review of what and how we want to use our devices (Turkle, 2011). The psychic prison metaphor enables the exploration of the ways code might limit or shape the current and future potential of humans.

3.3.2. **Instrument of domination: knowledge and control of code is power**

The instrument of domination metaphor focuses on the power relations between societal and communal constructs and code. Code is seen as a force that is used intentionally in order to shape and control others.
The metaphor concentrates on those aspects of code that may enable one group to dominate another group in ways that might not have been possible or feasible before. In other words, does the architecture of code have a propensity to cause inequality? If that is the case, then those who understand and have access to code have more power than those who do not. Because of the widespread nature of code, these issues are not just marginal questions. Code is not just at the heart of computer screens or smart phones, but affects a wide variety of things from pacemakers to cars and manufacturing units, offering unforeseen access to the everyday lives of humans. For example, if computer browsers can transfer so much information to Google that it can confidently personalise our search results, how much more does the mobile phone, with its sensors and location data, add to this information? Or what about our payment data, collected from credit card purchases and e-wallets, and the increasing popularisation of the internet of things? If all the data, from house temperature and a person's recycling habits to their payment history, are funnelled to one or a few institutions or corporations, does this not create the possibility for domination? In a similar way, the invisibility of code in the filter bubble creates problematic situations, as does the invisible and closed collection of data, both for individuals and for society as a whole (Morozov, 2013). The collection of data is problematic because of the lack of democratic availability of the data. Most of the information collection is done by large tech companies that keep the information to themselves or only sell it to other businesses (Fleischer et al., 2014). The problems of domination are not limited to tech companies and users; similar relationships can be seen in other scenarios. As more devices are both transformed into code and connected to networks, new opportunities for misuse arise.
For example, modern cars can be thought of as computer servers on wheels (Vallance, 2015), and when they are connected to outside networks they can also be hacked and remotely controlled, as two studies demonstrate (Checkoway et al., 2015; Vallance, 2015). Being able to take almost full control of any network-connected car from the comfort of your sofa, using just your computer and mobile phone, exemplifies the significance of domination by code very well. Other, more well-known examples are the privacy breaches that Edward Snowden revealed. The widespread nature of how governments spy on citizens illustrates the reach that digital devices and code have in our lives. Without realising it, we are giving up information about our lives that we did not even know existed before. One important angle on this massive data collection is that it is impossible to collect or manage that amount of information without code, thus increasing our dependency on code. The increase is not just in pure processing power, but even more in the capability to evaluate the information. Moreover, this processing power is more accessible to those who have more assets and time, creating an imbalance that is further increased by the lock-in effects common in digital technologies (Lanier, 2010; Morozov, 2013; Rushkoff, 2010). The imbalance is further increased by the prevalent proprietary nature of code (Stallman et al., 2009; Vaden, 2005). Yet, even if code allows for new kinds of domination, and may be biased towards those who have more assets, it also enables rebelling against those currently in power. The construction of code allows clever individuals to use it for their own purposes. For example, hackers in China, in the Arab world during the Arab Spring, or in other countries that suppress freedom of speech can benefit from code architecture by tunnelling messages securely to the outside world, bypassing governmental restrictions and walls.
In the instrument of domination metaphor, code can be seen as an architecture that allows more multi-layered ways of domination, and is both the instrument and the product of power relations.

3.4. Postmodern paradigm

The functionalist, interpretive and emancipatory paradigms provide different views of what code is. The postmodern paradigm provides a "meta-view" and focuses on the mechanisms through which we create these views. Essential questions in this paradigm are: how do we see code, what influences our perception of code, and what other ways of seeing could there be? The emphasis is thus on deconstructing the process of giving meaning to what code is.

3.4.1. **Carnival: understanding of code can be created through creative use of code**

To illustrate how the concept of code can be approached in the postmodern paradigm, we employ the metaphor of a carnival. In the carnival metaphor, many perceptions can exist at the same time, and playfulness, suspension of disbelief and multi-facetedness are embraced. The carnival metaphor focuses on the creative and artistic sides of code. It illustrates how code can inspire people and evoke various emotions. It also helps to explore the different reactions people have expressed towards code. However, the carnival metaphor does not fully reflect all the aspects of the postmodern paradigm and the endeavour to deconstruct the meaning and sense of code. Art and creativity can be seen as ways of deconstruction, but they are not the only ways to do this, nor can we say that they are the only views into the multiple nature of the postmodern. Jackson (2007) also uses the metaphor of a broken mirror to reflect the change from one solid picture into various differing pictures of the whole. A good example of the understanding of code in the carnival metaphor is creative coding, which concentrates on the expressive rather than the functional sides of code. Creative coding has its origins in the 1960s, when artists first began to experiment with computers.
In recent decades, creative coding has seen an upsurge, along with several tools aimed at creative professionals. “Creative code may sound like an oxymoron, but as in many technical processes in the art studio, creativity may emerge once rules are learned and then broken (Knochel & Patton, 2015).” Creative coding allows artists to question and critique code and, at the same time, express themselves through code. Just as a brush or a pen is a tool for a visual artist, code can be seen and used as an artistic instrument. Code, like any instrument, has its own biases and ways of working, creating a medium that allows things to be expressed in unique ways. As Cox says in his book Speaking Code: “Code, like language in general, evokes complex processes by which multiple voices can be expressed, modified, and further developed” (Cox, 2013, p. 6). One example of creative coding is “Smile TV”, a project by David Hedberg. “Smile TV” is a simple TV set, but it only works when the viewer is smiling, thus creating a real working product using modern technologies while at the same time critiquing digital culture (Scholz, 2014). The works in creative coding are diverse: some focus on visual effects or on the visualisation of data, such as Jer Thorp’s works (Thorp, 2009), while others use digital technologies to reveal hidden layers in those very technologies, such as the Immaterials project that materialises the existence of GPS signals (Arnall, 2014) and WiFi signals (Arnall, 2011). As the examples indicate, creative coding comments on the views of code expressed within multiple paradigms and metaphors. Whereas some works take a functionalist angle and use code in much the same way as when developing “working” software, others may misuse and break the workings of code altogether. And still others may use code as a way to critique the power issues arising from code. As such, the world around creative code is ambiguous and multi-faceted.
Creative coding illustrates how the carnival metaphor incorporates various views captured in other metaphors, joins them together and deconstructs them. Like many art works, the carnival metaphor focuses more on the experience than on theory. The art created does not justify its presence, but rather waits to be experienced. As such, it can show us those sides of code that may not otherwise be understood or seen. In this section, we have described different perceptions of code through the use of nine metaphors. In order to illustrate how these metaphors can be used to structure and inform a topical issue, we apply them to the ongoing discussion about teaching programming in schools. 4. Applying the metaphors of code to developing education around code and coding Teaching programming has lately been a much discussed subject in education. Finland, along with many other countries such as Estonia, the UK and the US, has started or is starting to incorporate programming into the basic curriculum in schools (Halinen, 2014; Sterling, 2015). Our research is mainly focused on the discussion, decisions and development of teaching programming in Finland, although it can be seen to echo similar tendencies in other countries such as the UK (see, for example, Williamson, 2015). When the teaching of programming moves from the level of higher education to the level of basic education, the understanding of programming becomes increasingly important: does the basic curriculum just prepare younger students for the digital industry as a possible workforce, or does it offer educational views on the complex issues around widespread digital technology? This problem is compounded by the fact that teachers are often unclear about the intended aims and goals of teaching programming (Pollari, 2014). The discussion around code is often limited to methods of teaching programming, such as different platforms, and to which programming language would be best to teach.
In some cases, code is also seen as part of art and craft, such as in Finland, where teaching programming is going to be divided between maths and craft lessons (Opetushallitus, 2014). In general, the views around teaching code are fairly limited and mechanical. Even though critique of technological determinism has been expressed, the idea that technology acts as an independent and often objective force is still often taken for granted (König et al., 1985). Understanding the way code structures our daily interaction with machines, and how it mediates our interaction with fellow humans (through digital services), is rarely seen as an essential societal skill. Rather, the code underlying the interfaces and services we use is taken as given. This limits students’ capability to identify and question the implicit assumptions embedded in this code. From the stance of critical pedagogy, Paulo Freire called as early as the 1990s for a policy on teaching technology (Freire, Freire, & De Oliveira, 2014). He acknowledged the increasing speed at which technologies advance and how this changes lives, and asked for “the quality of getting or creating ability to answer to different challenges with the same speed that things change. This is one of the demands of contemporary education. We need to form and not to train.” (Freire et al., 2014). In the previous section we applied nine metaphors to illustrate different perceptions of code and highlight various issues related to these perceptions. We now apply these metaphors to structure and broaden the discussion around teaching programming at the level of basic education. The most prevalent question that arises from applying the metaphors concerns the objectivity of code and programming. Is code seen as an objective exchange of ones and zeroes, or is it a value-laden power struggle between white male programmers and those who think they are users when they are, in fact, the product being sold?
The current dominant discussion emphasises the objective, logical and mathematical sides of code, as described by the functionalist paradigm and especially by the machine and organism metaphors. Code is seen as an unproblematic language to be taught in order for students to have more secure employment. In the context of planning education, this could mean a debate on which coding language should be taught, without questioning what the purpose of teaching a coding language is in the first place. The underlying rationale behind such a debate is that coding is a skill for the job market, and teaching coding – the right language and style – is thus good for ensuring the employability of the future workforce. The endeavour to improve education on learning to code can be seen as a large campaign in which both political and economic actors lobby their interests through boundary organisations (Williamson, 2015).³ ³ Williamson’s research is focused on the “learning to code” endeavour in the UK, but there are similarities with the developments taken towards including coding in the basic curriculum in Finland (Saariketo, 2015). But if we assume that the world around us is more complex, this perception of code does not hold. The brain and the flux & transformation metaphors move the focus away from the mechanical viewpoint and put emphasis on the intelligence of code. Code is not a simple language to be learned in order to ensure employment, but rather a complex man-made tool for shaping the world. In other words, code is seen as an instrument that creates and changes our everyday behaviour and practices. Artificial intelligence, as well as the solutionist attitude of many software firms, shows the possibilities and reach code has. Code is everywhere in our lives. From this standpoint, merely choosing a programming language to be taught or creating basic logical understanding might not be enough.
When learning and teaching code are understood more broadly, code can be more easily connected to real life situations. Thus students can have a more direct experience of the implications of code. This can enable discussion in the classroom about the role of code in our society – a crucial discussion, but one where there are no right answers. Here Freire’s idea of forming rather than training students becomes clearer. Freire sees that education has the responsibility to create digital minds. Training students to learn a programming language is not enough, as it does not form the students to understand the full reach of digital technologies, thus preventing them from creating knowledge themselves, i.e. possessing a critical mind (Freire et al., 2014). The ubiquitous nature of code leads to the question of whether we agree on how good or beneficial code is today. And furthermore, what do we mean by good or beneficial? These questions are essentially intertwined with public education’s aims to help students not only to live in society but to understand societal structures and ethics, and also to question them. The interpretive paradigm focuses on these questions and on the way code influences society and culture. The culture metaphor anchors code in its cultural context, offering views on the different mindsets, ideologies and trends that influence the code. The culture metaphor explains the societal, cultural and subcultural contexts that affect the ways code is written, offering us ways to better understand why code exists the way it does. For example, understanding the ways free software, open source software and proprietary software differ from each other can offer ways to influence software development, as well as an understanding of the design choices in the software. Furthermore, the culture metaphor can offer views of the historical context of code and digital technologies.
Understanding the beginnings of digitality, such as Babbage’s machine, Leibniz’s binary logic, or Ada Lovelace, the first computer programmer, might offer valuable connections that increase the student’s personal understanding of code. The metaphor of the political system covers much of the same ground as the culture metaphor, but from a more societal standpoint. It addresses critical questions about the purposes and morals of code: What part does code play in the democratic system? The political system metaphor offers ways to approach subjects such as privacy, whistle-blowers, free software ideology or the structure and politics of the internet. It can also be expanded to the philosophies and history of technological invention, and to a discussion about technological determinism. Possible questions to be raised in this metaphor include how technology changes society, what the relations between technology and society are, and how aspects of society, such as political decisions or economic forces, shape the way the code we use today is made. The Ars Industrialis manifesto by the French philosopher Bernard Stiegler might offer interesting starting points for classroom discussions about the role of code in society, as it starkly casts technology as a pharmakon: both poison and remedy (Stiegler, 2005, 2010). The metaphor of code as a political system also offers more reflective viewpoints on the future of code, which might provide interesting talking points when contrasted with the brain or flux and transformation metaphors. The interpretive paradigm emphasises the various perceptions of the background and the context of the code we use every day. This information can be beneficial for teachers as well as students, increasing their understanding of the reach that code has. It can prompt practical discussions on the reasons for and implications of the software we use every day.
It also offers the idea that code is not a fixed thing, but a malleable invention, which is affected by the coders, by the culture around it, and by societal decisions and politics. This kind of critical understanding might be what Freire calls forming instead of training. The emancipatory paradigm further deepens the humanistic viewpoints on code. Code is seen not only as mechanical or societal, but as a force that has the power to affect and influence our lives. It questions the intentions of the code as well as our position in the coded world: Do people have the power to decide, or are they being manipulated? Is code made to be truly helpful for users, or is it created for the benefit of the coder or the company? The psychic prison metaphor considers these questions from the individual standpoint, and the instrument of domination metaphor deals with the power struggle in a broader context. The psychic prison metaphor asks how people (students, teachers, parents) are influenced by code and what its ramifications are. Do coded environments change people, and if so, how? Or, as Jaron Lanier asks, do we change ourselves because of them (Lanier, 2010)? How does the filter bubble affect learning or searching for information? How different can the coded environments be, for example, between teacher and students? How do we deal with the loss of common “neutral” media such as newspapers? Themes like obsessive gaming, social media usage, and critical, self-aware ways of using digital technologies are at the heart of this metaphor. These questions can also lead to self-discovery in the digital age through different challenges students can face, for example being without a smartphone for a day, or projects such as the Bored and Brilliant project organised by the WNYC radio show Note to Self (http://www.wnyc.org/series/bored-and-brilliant/).
Wajcman has written in more detail, in her latest book, about the paradox of losing time to the very digital technologies that are supposed to save us time (Wajcman, 2014). While the culture and political system metaphors dealt with many cultural and societal issues from a general standpoint, the instrument of domination metaphor emphasises the power issues of code. Code is a tool for building structures and obtaining knowledge, and whoever has control over these structures and this information has power over the users of the software or service. As Rushkoff points out, some of the issues created by code are inherent in the code itself, and some are created by the people developing code. An example of the former is the binary nature of code, which leads to a different mode of thinking than humans have. An example of the latter is the hijacking of the social connections that people form over the internet, meaning that the platforms that offer connections use those connections for their own purposes, such as harvesting data for marketing purposes (Rushkoff, 2010). Being aware of the power issues inherent in code is crucial in forming a critical understanding of code. Increased awareness of these issues and their origins at the level of code may help students to become more critical consumers, and it may also trigger changes in these platforms. When students are able to detect controlling structures inherent in code, they are also empowered to challenge these structures, which may create a new power dynamic in the digital world. The former examples have been mostly about gaining skills (learning a programming language), learning how the world works (the ubiquity and influence of code) and debating what is preferable. The postmodern paradigm and the carnival metaphor highlight creativity, emotions and experience in education about code. The postmodern paradigm emphasises the deconstruction and reconstruction of the concept of code.
The carnival metaphor uses code itself to challenge the idea of code. It can encompass all the other metaphors or views of code to create a statement of its own. The tool it uses for this is the code itself. It shows how important arts and crafts are in the understanding of code. Not only can creativity be used to invent something, but it can also be used as a tool to understand code, or to critique code and its usage. Creating something by hand is an important tool in knowledge acquisition (Kojonkoski-Rännäli, 1998), and creative use of code could be argued to be part of the craft skills of the 21st century. The different viewpoints and suggestions for education around code and programming are summarised in Table 2. Our point is not to recommend that a particular metaphor should be followed and others ignored, or to suggest a ranking of the usefulness of the metaphors. Instead, we argue that all of the areas the metaphors bring out should be included in the teaching of code and programming. As we proposed at the beginning, it might be more fruitful to think of teaching programming in the basic curriculum as being more about improving code literacy than about teaching coding as merely a mechanical skill. Code literacy includes both understanding the more ambiguous and multiplexed issues that exist around code, and the basic principles and logic of coding. The machine and organism metaphors in the functionalist paradigm set the basis for understanding code from the technical perspective. This helps in understanding how code is used in more complex real world situations, as the brain and flux & transformation metaphors illustrated. The culture and political system metaphors help to broaden the scope towards societal issues, while the instrument of domination and psychic prison metaphors illustrate the coercive characteristics code can have.
Finally, the postmodern paradigm and the carnival metaphor broaden the method of learning about code from thinking and discussing to experience and creativity. These metaphors may be implemented in several ways as a part of ICT education. The metaphors and the issues may be divided between different disciplines and may thus be distributed more evenly across existing school subjects. Or they can be studied as a whole in a phenomenon-based learning project, which can combine different school subjects to form a larger picture of the subject. Or programming could be its own subject, where it would not only include mechanical knowledge of programming, but would incorporate all the different issues we have brought forth in this article. Code could also be seen as a new subject: as “digital survival skills for digital natives”. In Finland, recent plans to focus more on phenomenon-based learning open up many interesting opportunities for teaching code and creating a broader understanding around it – improving code literacy (Halinen, 2014). 5. Discussion & conclusion As coding and code literacy gain more popularity, what is meant by code becomes more important. However, the societal discussion around code is still fragmented and partly superficial, focusing only on a few points of view and more often on a mechanical understanding of code. There is also tension between these different views. Our article illustrates ways of embracing the tensions, and also of raising the neglected aspects onto the educational agenda. We propose that the aim should not be just code and programming as a skill (coding), but also as a capability for better understanding the world and its structures. This understanding can be seen to become even more important in the future. We propose the metaphors as a useful heuristic for illustrating different viewpoints on code. However, some limitations can also be identified.
From the theoretical side, the key question is whether the metaphors adapted from organisation and systems science cover every important aspect of code. This relates to another limitation, that of the lack of empirical evidence. While we do illustrate the metaphors with examples, we have not presented an empirical case study in which all the metaphors would be used. We believe that such a case study would be a fruitful direction for further research and would help to refine the metaphors. Furthermore, an empirical case study would enable analysing how different metaphors interact with each other, where the main tensions lie, and which metaphors are closely linked to each other. Further research could also focus on the social practices and historical backgrounds of these metaphors. These points are outside the scope of this article, as we have focused on describing the metaphors and using them as a lens to focus on the various effects code has. Another strand of possible future research might focus on the emancipatory paradigm, for example by dissecting platform monopolies and the ways they govern code. Related to this, interesting work regarding educational platforms has been done by Williamson (see, for example, https://codeactsineducation.wordpress.com). Our approach illustrates that there are multiple views of what code is and how it influences our everyday lives. This understanding may help to better reflect the needs of future education. The metaphors we have described can be used as one way to support the planning of education around coding, as well as to structure the discussion around code and coding. From a societal standpoint, the metaphors help to identify the dominant metaphor and thus to understand the current direction of code-based issues. Contrasting the dominant metaphor with the alternative views proposed by the other metaphors presents us with alternative future directions. However, we do not propose that any singular view is sufficient by itself.
Rather, the focus should be on opening the discussion, allowing plural views and helping to take different views systematically into account. Acknowledgements The research leading to these results has received funding from the Strategic Research Council at the Academy of Finland under grant agreement no 293446 – Platform Value Now: Value capturing in the fast emerging platform ecosystems. References FSF. The free software definition. https://www.gnu.org/philosophy/free-sw.html Turkle, S. (2011). *Alone together: Why we expect more from technology and less from each other*. New York: Basic Books. Washington, G. (2013). 84% of all stock trades are by high-frequency computers... only 16% are done by human traders. Zero Hedge. Retrieved September 23, 2015, from zerohedge.com/contributed/2012-17-26/84-all-stock-trades-are-high-frequency-computers-%E2%80%93only-16-are-done-human-trades Wilson, M., Scalise, K., & Gochyeyev, P. (2015). Rethinking ICT literacy: From computer skills to social network settings. *Thinking Skills and Creativity*, http://dx.doi.org/10.1016/j.tsc.2015.05.001
A Library-Based Approach to Task Parallelism in a Data-Parallel Language Ian Foster,* David R. Kohr, Jr.,* Rakesh Krishnaiyer,† and Alok Choudhary‡ *Mathematics and Computer Science Division, Argonne National Laboratory, Argonne, Illinois 60439; †Department of Computer and Information Science, Syracuse University, Syracuse, New York 13244; and ‡ECE Department, Technological Institute, Northwestern University, 2145 Sheridan Road, Evanston, Illinois 60208-3118 Pure data-parallel languages such as High Performance Fortran version 1 (HPF) do not allow efficient expression of mixed task/data-parallel computations or the coupling of separately compiled data-parallel modules. In this paper, we show how these common parallel program structures can be represented, with only minor extensions to the HPF model, by using a coordination library based on the Message Passing Interface (MPI). This library allows data-parallel tasks to exchange distributed data structures using calls to simple communication functions. We present microbenchmark results that characterize the performance of this library and that quantify the impact of optimizations that allow reuse of communication schedules in common situations. In addition, results from two-dimensional FFT, convolution, and multiblock programs demonstrate that the HPF/MPI library can provide performance superior to that of pure HPF. We conclude that this synergistic combination of two parallel programming standards represents a useful approach to task parallelism in a data-parallel framework, increasing the range of problems addressable in HPF without requiring complex compiler technology. 1. INTRODUCTION The data-parallel language High Performance Fortran version 1 (abbreviated simply as HPF in the following) provides a portable, high-level notation for expressing data-parallel algorithms [17]. An HPF computation has a single-threaded control structure, global name space, and loosely synchronous parallel execution model. 
Many problems requiring high-performance implementations can be expressed succinctly in HPF. However, HPF does not adequately address task parallelism or heterogeneous computing. Examples of applications that are not easily expressed using HPF alone [6, 14] include multidisciplinary applications where different modules represent distinct scientific disciplines, programs that interact with user interface devices, applications involving irregularly structured data such as multiblock codes, and image-processing applications in which pipeline structures can be used to increase performance. Such applications must exploit task parallelism for efficient execution on multicomputers or on heterogeneous collections of parallel machines. Yet they may incorporate significant data-parallel substructures. These observations have motivated proposals for the integration of task and data parallelism. Two principal approaches have been investigated. Compiler-based approaches seek to identify task-parallel structures automatically, within data-parallel specifications [11, 14, 21], while language-based approaches provide new language constructs for specifying task parallelism explicitly [3, 6, 19, 24]. Both approaches have shown promise in certain application areas, but each also has disadvantages. Compiler-based approaches complicate compiler development and performance tuning, and language-based approaches also introduce the need to standardize new language features. In this paper, we propose an alternative approach to task/data-parallel integration, based on specialized coordination libraries designed to be called from data-parallel programs. These libraries support an execution model in which disjoint process groups (corresponding to data-parallel tasks) interact with each other by calling group-oriented communication functions. 
In keeping with the sequential reading normally associated with data-parallel programs, each task can be read as a sequential program that calls equivalent single-threaded coordination libraries. The potentially complex communication and synchronization operations required to transfer data among process groups are encapsulated within the coordination library implementations. To illustrate and explore this approach, we have defined and implemented a library that allows the use of a subset of the Message Passing Interface (MPI) [13] to coordinate HPF tasks. MPI standardizes an interaction model that has been widely used and is well understood within the high-performance computing community. It defines functions for both point-to-point and collective communication among tasks executing in separate address spaces; its definition permits efficient implementations on both shared and distributed-memory computers [12]. Our HPF/MPI library allows these same functions to be used to communicate and synchronize among HPF tasks. This integration of two parallel programming standards allows us to incorporate useful new functionality into HPF programming environments without requiring complex new directives or compiler technology. We argue that the approach provides a conceptually economical and hence easily understood model for parallel program development and performance tuning. In brief, the contributions of this paper are as follows: 1. The definition of a novel parallel programming model in which group-oriented communication libraries are used to coordinate the execution of process groups corresponding to data-parallel tasks. 2. The demonstration that an HPF binding for MPI allows the range of problems efficiently expressible in HPF to be extended without excessive conceptual or implementation complexity. 3.
The illustration and evaluation, using realistic applications, of design techniques for achieving communication between data-parallel tasks, for integrating MPI library calls into HPF programs, and for exploiting information provided by MPI communication calls to improve communication performance. A preliminary report on some of the techniques and results presented here appeared as [7]; the present paper provides a more detailed description of our techniques and introduces additional optimizations that improve performance by a factor of two or more in some situations. The problem of parallel program coupling has been investigated by a number of other groups, although not in this standards-based fashion. Groups building multidisciplinary models frequently build specialized “couplers” responsible for transferring data from one model to another. Coupler toolkits have been proposed and built, but not widely adopted. MetaCHAOS [5] provides a more general coupling tool by defining a model in which programs can export and import distributed data structures; MetaCHAOS handles communication scheduling. These various efforts are complementary to the work reported here in that they could all benefit from the efficient communication mechanisms used in our HPF/MPI library, if the models in question were written in HPF. In the rest of this paper, we describe the design and implementation of our HPF/MPI library, provide an example of its use, and evaluate its performance. In the implementation section, we focus on issues associated with point-to-point communication and describe techniques for determining data distribution information and for communicating distributed data structures efficiently from sender to receiver. We also show how specialized MPI communication functions can be used to trigger optimizations that improve performance in typical communication structures.
We use microbenchmark experiments to quantify the costs associated with our techniques and the benefits of our optimizations. We also present results from multi-block and two-dimensional fast Fourier transform (FFT) and convolution codes that demonstrate that HPF/MPI can indeed offer performance advantages relative to pure HPF. 2. DATA AND TASK PARALLELISM We motivate our approach to the integration of task and data parallelism by discussing data parallelism and HPF and then reviewing approaches to the extension of the data-parallel model. 2.1. Data Parallelism and HPF Data-parallel languages allow programmers to exploit the concurrency that derives from the application of the same operation to all or most elements of large data structures [15]. Data-parallel languages have significant advantages relative to the lower level mechanisms that might otherwise be used to develop parallel programs. Programs are deterministic and have a sequential reading. This simplifies development and allows reuse of existing program development methodologies—and, with some modification, tools. In addition, programmers need not specify how data are moved between processors. On the other hand, the high level of specification introduces significant challenges for compilers, which must be able to translate data-parallel specifications into efficient programs [1, 16, 22, 27]. High Performance Fortran [17] is perhaps the best-known data-parallel language. HPF exploits the data parallelism resulting from concurrent operations on arrays. These operations may be specified either explicitly by using parallel constructs (e.g., array expressions and \texttt{FORALL}) or implicitly by using traditional \texttt{DO} loops. HPF addresses the problem of efficient implementation by providing directives that programmers can use to guide the parallelization process. In particular, distribution directives specify how data are to be mapped to processors. 
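To make concrete what a distribution directive expresses, the owner of each index under HPF's BLOCK mapping can be computed directly. The following is a minimal sketch in Python, used here purely for illustration; the function names are ours, not HPF intrinsics:

```python
def block_owner(i, n, p):
    """Return the processor that owns index i of an n-element axis
    distributed BLOCK over p processors (contiguous chunks of ceil(n/p))."""
    chunk = -(-n // p)  # ceiling division
    return i // chunk

def block_extent(proc, n, p):
    """Return the half-open index range [lo, hi) owned by processor `proc`."""
    chunk = -(-n // p)
    lo = proc * chunk
    return lo, min(lo + chunk, n)

# Eight rows distributed by BLOCK over four processors: two rows each.
print([block_owner(i, 8, 4) for i in range(8)])  # → [0, 0, 1, 1, 2, 2, 3, 3]
```

A row-BLOCK distribution of the array \( A \) in the FFT example means each processor owns such a contiguous band of rows, so the row-wise FFTs need no communication.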
An HPF compiler normally generates a single-program, multiple-data (SPMD) parallel program by applying the \texttt{owner computes rule} to partition the operations performed by the program; the processor that “owns” a variable is responsible for updating its value [1, 22, 27]. The compiler also introduces communication operations when local computation requires remote data. An attractive feature of this implementation strategy is that the mapping from user program to executable code is fairly straightforward. Hence, programmers can understand how changes in program text affect performance. We use a two-dimensional fast Fourier transform (2-D FFT) to illustrate the application of HPF. The HPF implementation presented in Fig. 1 calls the subroutine \texttt{rowfft} to apply a one-dimensional (1-D) FFT to each row of the 2-D array \texttt{A}, and then transposes the array and calls \texttt{rowfft} again to apply a 1-D FFT to each column. The 1-D FFTs performed within \texttt{rowfft} are independent of each other and can proceed in parallel. The \texttt{PROCESSORS} directive indicates that the program is to run on eight virtual processors; the \texttt{DISTRIBUTE} directive indicates that \texttt{A} is distributed by row. This distribution allows the rowfft routine to proceed without communication. However, the transposition \( A = \text{transpose}(A) \) involves all-to-all communication. 2.2. Task Parallelism Certain important program structures and application classes are not directly expressible in HPF [6, 14]. For example, both real-time monitoring and computational steering require that programmers connect a data-parallel simulation code to another sequential or parallel program that handles I/O. The simulation task periodically sends arrays to the I/O task, which processes them in some way (e.g., displays them) and perhaps also passes control information back to the simulation. As a second example, we consider the 2-D FFT once again. 
Assume an array of size \( N \times N \) and \( P \) processors. Because the computation associated with the FFT scales as \( N^2 \log N \) while the communication due to the transpose scales only as \( \max(N^2, P^2) \), the data-parallel algorithm described in Section 2.1 is efficient when \( N \) is much larger than \( P \). However, signal-processing systems must often quickly process a stream of arrays of relatively small size. (The array size corresponds to the sensor resolution and might be 256 \( \times \) 256 or less.) In these situations, an alternative pipeline algorithm is often more efficient [4, 14]. The alternative algorithm partitions the FFT computation among the processors such that \( P/2 \) processors perform the read and the first set of 1-D FFTs, while the other \( P/2 \) perform the second set of 1-D FFTs and the write. At each step, intermediate results are communicated from the first to the second set of processors. These intermediate results must be transposed on the way; since each processor set has size \( P/2 \), \( P^2/4 \) messages are required. In contrast, the data-parallel algorithm’s all-to-all communication involves \( P(P - 1) \) messages, communicated by \( P \) processors: roughly twice as many per processor.

These two examples show how both modularity and performance concerns can motivate us to structure programs as collections of data-parallel tasks. How are such task/data-parallel computations to be represented in a data-parallel language such as HPF? Two principal approaches have been proposed: implicit approaches based on compiler technology and explicit approaches based on language extensions or programming environments for task coordination.

Compiler-based approaches. Advocates of implicit, compiler-based approaches seek to develop more sophisticated compilers capable of extracting task-parallel algorithms from data-parallel specifications.
Frequently, they will use new directives to trigger the application of specific transformations. This general approach has been used to exploit pipeline [14] and functional parallelism [21], for example. (A variant of the former approach has been incorporated in HPF version 2, but is not yet supported in commercial compilers.) Implicit, compiler-based approaches maintain a deterministic, sequential reading for programs. However, these approaches also tend to increase the complexity of the mapping from user program to executable code. This increased complexity can be a disadvantage for both programmers and compiler writers. For programmers, it becomes more difficult to understand how changes in program source affect achieved performance, and hence more difficult to write efficient programs. For compiler writers, it becomes more difficult to build compilers that generate efficient code, particularly because optimization techniques for different constructs and situations tend to interact in complex ways. Language-based approaches. Advocates of explicit, language-based approaches propose new language constructs that allow programmers to specify the creation and coordination of tasks explicitly. The basic concept is that of a coordination language [2, 9], except that because the tasks are themselves data-parallel programs, we obtain a hierarchical execution model in which task-parallel computation structures orchestrate the execution of multiple data-parallel tasks. Language-based approaches have been proposed that use a graphical notation [3], channels [6], remote procedure calls [19], and a simple pipeline notation [24] to connect data-parallel computations. Promising results have been obtained. Nevertheless, there is as yet no consensus on which language constructs are best. Since successful adoption depends on consensus and then standardization, language-based approaches clearly are not a near-term solution. 3. 
AN HPF BINDING FOR MPI

Explicit task-parallel coordination libraries represent an alternative approach to the integration of task and data parallelism that avoids the difficulties associated with compiler-based and language-based techniques. We use the example of an HPF binding for MPI to illustrate the approach and to explore practical issues associated with its implementation. MPI provides a set of functions, data types, and protocols for exchanging data among and otherwise coordinating the execution of multiple tasks; a “binding” defines the syntax used for MPI functions and data types in a particular language. Previous MPI implementations have supported bindings only for the sequential languages C and Fortran 77 [12]. However, there is no reason why MPI functions may not also be used for communication among data-parallel tasks. Our HPF binding for MPI makes this possible. It is intended to be used as follows:

- A programmer initiating a computation requests (using some implementation-dependent mechanism) that a certain number of tasks be created; each task executes a specified HPF program on a specified number of processors.

- Tasks can call MPI functions to exchange data with other tasks, using either point-to-point or collective communication operations. In point-to-point communications, a sender and a receiver cooperate to transfer data from sender to receiver; in collective communications, multiple tasks cooperate—for example, to perform a reduction.

When reading HPF/MPI programs, HPF directives can be ignored and the code understood as implementing a set of sequential tasks that communicate using MPI functions. In HPF/MPI, the source and destination arguments that appear in MPI calls denote the IDs of the corresponding tasks. Figure 2 uses HPF/MPI to implement the pipelined 2-D FFT algorithm described in Section 2.2.
Task 0 calls rowfft to apply a 1-D FFT to each row of the array $A$ ($8 \times 8$ complex numbers, distributed by row) and then calls the MPI function MPI_Send to send the contents of $A$ to task 1. Task 1 implements the transpose by using MPI_Recv to receive this data from task 0 into an array $B$, distributed by column, and then calls a subroutine colfft to apply a 1-D FFT to each column. The value 99 is a message tag. A comparison with Fig. 1 shows that the HPF/MPI version is not significantly more complex. In essence, we have replaced the transpose in the HPF program with two subroutine calls. Notice that these calls specify only the logical transfer of data from one data-parallel task to another: the potentially complex communication operations required to achieve this transfer are encapsulated within the HPF/MPI library. This example illustrates how a coordination library can gain leverage from a data parallel language’s high-level support for the management of distributed data structures and associated index translation operations, while providing an explicit, easily understood notation for specifying task-parallel computations. In more complex situations—such as multiblock codes—an HPF/MPI formulation can actually be more succinct than a pure HPF version. 4. IMPLEMENTATION A number of factors influenced the design of our prototype implementation of HPF/MPI. For example, we wanted our library to be portable among different hardware platforms, and to be able to operate with different HPF compilation systems. At the same time, we wanted typical HPF/MPI applications to achieve good performance with only modest effort by the programmer. 4.1. Design Overview We now describe the techniques that we have developed to address these requirements. For brevity, we examine only the case of point-to-point operations on distributed-memory multicomputers; elsewhere we discuss techniques for implementing other operations [8]. 
Figure 3 illustrates the basic processing steps performed by our library for a single point-to-point transfer. The actions taken by senders and receivers are symmetrical, so it suffices to examine just the processing steps of a send operation. These seven steps are as follows:

1. Distribution inquiry. Standard HPF inquiry intrinsics such as HPF_DISTRIBUTION are called to determine the distribution of the array being sent.

2. Extrinsic call. The portion of the library that is written in HPF calls a coordination library function that is written in C and declared as extrinsic (foreign) to HPF. This causes the execution model of each processor in the task to change from data-parallel (globally single-threaded) to SPMD (separate threads of control on each processor, as in HPF’s local mode of execution [17]).

3. Array descriptor exchange. Sending processors exchange distribution information with receiving processors about the source and destination arrays. After Step 1, all senders have distribution descriptors for the source array and all receivers have descriptors for the destination. We exploit this fact to avoid expensive broadcast operations and instead perform pairwise exchanges between individual senders and receivers.

4. Communication scheduling. Sending processors use the distribution information obtained in Step 3 to compute communication schedules, that is, the subsections of the source array that should be sent to each receiving processor.

5. Transfer buffer pack. Using the communication schedule computed in Step 4, we pack the array elements required by a particular receiver into a contiguous communication buffer.

6. Data send. The contents of the buffer packed in Step 5 are sent to the corresponding receiver.

7. Extrinsic return. By returning from the extrinsic function called in Step 2, the execution model of each processor reverts to data-parallel, so that execution of the HPF program may resume.
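Step 4 admits a compact illustration. The sketch below (Python, for illustration only; the helper names are ours, and the real library uses the FALLS representation rather than this naive form) computes a send-side schedule for redistributing an n × n array from a (BLOCK,*) source task to a (*,BLOCK) destination task: sender s must ship to receiver r exactly the tile rows(s) × cols(r).

```python
def block_ranges(n, p):
    """Half-open [lo, hi) index ranges of a BLOCK distribution of n items over p procs."""
    chunk = -(-n // p)  # ceiling division
    return [(q * chunk, min((q + 1) * chunk, n)) for q in range(p)]

def send_schedule(sender, n, p_send, p_recv):
    """Schedule for one sender when an n x n array moves from a (BLOCK,*) source
    task of p_send processors to a (*,BLOCK) destination task of p_recv processors.
    Returns {receiver: (row_range, col_range)}: the subsection to pack and send."""
    rows = block_ranges(n, p_send)[sender]
    return {r: (rows, cols) for r, cols in enumerate(block_ranges(n, p_recv))}
```

In this distribution pair every sender communicates with every receiver, which is why the ping-pong benchmark of Section 5 uses it as a worst case.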
Steps 5 and 6 are repeated once for each processor to which data must be sent. The order in which each sender transfers array subsections to each receiver is chosen so as to maximize parallelism among the individual transfers; a detailed description of this ordering appears in [18]. 4.2. Implementation Details Based on the above design, we have implemented a prototype HPF/MPI library that supports a subset of MPI’s point-to-point communication functions. This prototype operates with the commercial HPF compiler pghpf (version 2.0), developed by the Portland Group, Inc. [25]. Because of our desire for portability, we defined a run-time initialization interface between pghpf and HPF/MPI that minimizes the dependence of HPF/MPI upon the internals of the HPF runtime system. The interface establishes separate MPI communicators for each HPF task and for HPF/MPI, so that the communications of the HPF tasks and HPF/MPI cannot interfere with one another. We believe that this interface will work also with other HPF compilation systems that use MPI for communications. In some circumstances, it is desirable to reduce the total volume of communicated data by sending only a portion of an array, rather than an entire array. HPF permits programmers to denote portions of arrays using array section notation. Our implementation of HPF/MPI accepts array sections as the source or destination of a point-to-point operation. As an example, the following call sends just the first row of the source array A: call MPI_Send(A(1,:), N, MPI_FLOAT, 1, 99, MPI_COMM_WORLD) While developing HPF/MPI, we encountered design choices in which one must make tradeoffs between portability and performance. The tradeoffs center around whether HPF/MPI accesses distributed arrays using the portable extrinsic call mechanism, which copies arrays between the nonportable layout of a particular HPF compiler and the portable, contiguous layout used by C and Fortran 77. 
A system that does not use extrinsic calls, and instead accesses arrays directly in HPF’s internal representation, saves data copying at the cost of portability. We have implemented two different versions of HPF/MPI, one called “non-DIRECT” which uses extrinsic calls, and another (“DIRECT”) which avoids extrinsic calls by directly accessing arrays. In the next section we quantify the overhead of using the extrinsic call mechanism. Communication schedules are generated in Step 4 using algorithms based on the FALLS (FAmiLy of Line Segments) distributed array representation of Ramaswamy and Banerjee [20]. These algorithms compute the minimal sets of array elements that must be transferred from sending to receiving processors. The algorithms rely on modulo arithmetic and are highly efficient: for typical redistributions, their running time is proportional to the number of participating processors. As we shall see in the next section, schedule computation never constitutes more than a small fraction of total transfer time. MPI provides programmers with facilities for optimizing communication between processors. Many of these facilities are useful in the context of intertask communication also. For example, the functions MPI_Send_Init and MPI_Recv_Init define what are called persistent requests for point-to-point operations; once defined, a request can be executed repeatedly using MPI_Start. As illustrated in Fig. 4, MPI programmers can use these functions to indicate that the same data transfer will be performed many times. Our HPF/MPI implementation of these calls computes a communication schedule just once, when the request is defined. Subsequent calls to MPI_Start reuse the schedule, so that costs associated with Steps 1, 3, and 4 can be amortized over many transfers. In [8] we discuss how other MPI optimization features could be incorporated into HPF/MPI. 
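The persistent-request optimization amounts to memoizing the results of Steps 1, 3, and 4. A minimal model of the idea follows (Python, illustrative only; the class and its counter are our own instrumentation, not part of MPI):

```python
class PersistentSend:
    """Models an HPF/MPI persistent request: the communication schedule is
    computed once, at definition time, and reused by every start()."""
    schedules_computed = 0  # instrumentation for this sketch only

    def __init__(self, compute_schedule):
        PersistentSend.schedules_computed += 1  # Steps 1, 3, 4 happen once
        self.schedule = compute_schedule()

    def start(self, pack_and_send):
        return pack_and_send(self.schedule)     # Steps 5-6 run on every transfer

req = PersistentSend(lambda: ["tile(s, r)"])                       # MPI_Send_Init analogue
sent = [req.start(lambda sched: len(sched)) for _ in range(100)]   # MPI_Start x 100
assert PersistentSend.schedules_computed == 1                      # schedule reused
```

The amortization matters most for small messages, where schedule computation and descriptor exchange are a large fraction of total transfer time.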
!HPF$ processors pr(4)
      complex A(8,8)
      integer request
!HPF$ distribute A(BLOCK,*)
      call MPI_Send_Init(A, 8*8, MPI_COMPLEX, 1, 99, MPI_COMM_WORLD, request, ierr)
      do i = 1, 100
        call read(A)
        call rowfft(8, A)
        call MPI_Start(request, ierr)
      end do

FIG. 4. An alternative HPF/MPI formulation of the sending side of the pipelined 2-D FFT, in which MPI_Send_Init is used to define a persistent request that is then executed repeatedly by MPI_Start.

5. PERFORMANCE STUDIES

We use a simple microbenchmark to quantify the costs associated with the implementation scheme just described. This “ping-pong” program, presented in Fig. 5, repeatedly exchanges a 2-D array of fixed size between two tasks. The array is distributed (BLOCK, *) in the sender and (*, BLOCK) in the receiver, which induces a worst-case communication pattern in which all senders must communicate with all receivers. We run the benchmark using tasks of varying size exchanging both small (4 KB) and large (4 MB) arrays. This allows us to determine how the cost components of transfer operations vary with task and array size. We measure three different versions of the benchmark: one that uses neither persistent operations nor direct access to HPF arrays (“Nonpersistent/Nondirect”), one that uses persistent operations but not direct access (“Persistent/Nondirect”), and one that uses both persistent operations and direct access (“Persistent/Direct”). By comparing these different versions, we can gauge the effectiveness of the persistent operation optimization and the cost of the extrinsic call mechanism. All experiments are performed on the Argonne IBM SP2, which contains 128 Power 1 processors connected by an SP2 multistage crossbar switch. We record the maximum execution time across all processors. As the underlying sequential communication library we use the portable MPICH implementation of MPI.

5.1. Description of Results

The plots of Fig. 6 show the resulting measurements.
Each vertical bar represents the one-way transfer time obtained from one experiment, and the shaded regions within each bar represent the fraction of time spent in the processing steps described in the previous section. For brevity, we have combined into one shaded region the times for corresponding steps in the sender and receiver. In addition, pack and unpack are combined as Message Assembly, and send and receive are labeled Data Transfer. We have also merged Extrinsic Return into Extrinsic Call.

In studying these results, we first note that for small problem sizes (N), the total cost increases with the number of processors (P), while for large N, total time decreases with P. These results are to be expected: for small N, the dominant contributor to total communication cost is the message startup time, or latency, which increases with P; for large N, the dominant contributor is the message transfer time, which is proportional to message length and therefore decreases with P.

5.2. Processing Step Costs

We now analyze the costs related to each of the processing steps. Steps 1, 3, and 4 are associated with determining how to perform a communication, and their costs are amortized over repeated transfers if persistent communications are used. These three cost components are shown uppermost in each bar, which in most cases allows us to distinguish the costs for nonpersistent and persistent communication. By comparing the Nonpersistent/Nondirect cases with the Persistent/Nondirect cases, we see that for small messages, using persistent operations results in a savings of up to 40% of the total time. The savings for large messages is negligible, because per-byte transfer costs dominate the total time.

We note that the time for Step 3 (Array Descriptor Exchange) includes synchronization delays resulting from extra processing performed at receiving processors in other steps, such as communication and buffer unpacking at the end of the receive.
Hence the high Step 3 times for large N and small P in the Nonpersistent/Nondirect case are an artifact of the experimental protocol, not a sign of inefficiency in the implementation of descriptor exchange. A similar synchronization effect causes increased times for Data Transfer in the two persistent cases.

Step 2 (Extrinsic Call) represents the costs associated with the extrinsic call mechanism. This component represents a fixed cost for multiple subroutine calls, plus a per-byte overhead for copying array data between HPF’s memory layout and a contiguous layout. For P = 1 and an array of size 4 KB, Step 2 costs about 350 µs; for P = 1 and a 4 MB array, the cost is about 36 ms. These data suggest a fixed cost of roughly 300 µs and an incremental cost of about 0.0086 µs/byte (116 MB/s copy bandwidth). Because the source array in the ping-pong benchmark is an input argument to the send operation, and is not changed between sends, pghpf optimizes the extrinsic call by performing a copy during the extrinsic call of just the first send operation. In contrast, a copy must be performed during the extrinsic return step of each receive operation. Therefore the per-byte costs of Extrinsic Call in Fig. 6 reflect copying only on the receiving side.

FIG. 6. Time required for a one-way HPF/MPI point-to-point communication on an IBM SP2, for various array sizes, task sizes, and implementation versions.

FIG. 7. Execution time per input array for HPF and HPF/MPI implementations of the 2-D FFT application, as a function of the number of processors. Results are given for different problem sizes.

By comparing the Persistent/Nondirect and Persistent/Direct cases, we can evaluate the benefit of avoiding the extrinsic call mechanism. For small arrays, elimination of the fixed extrinsic call costs improves performance by up to 30%. For large arrays, elimination of the copying performed during an extrinsic call provides improvements of up to 20%.
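The fixed and per-byte components quoted for Step 2 can be recovered from the two measurements by a simple two-point linear fit. A quick check of the arithmetic (Python; we assume 4 KB = 2^12 bytes and 4 MB = 2^22 bytes):

```python
# Step 2 (Extrinsic Call) measurements at P = 1, taken from the text.
n_small, t_small = 2**12, 350.0      # bytes, microseconds
n_large, t_large = 2**22, 36000.0

per_byte = (t_large - t_small) / (n_large - n_small)  # us/byte
fixed = t_small - per_byte * n_small                  # us
copy_bw = 1.0 / per_byte                              # bytes/us, i.e., MB/s

# per_byte comes out near 0.0085 us/byte and fixed near 315 us, consistent
# with the "roughly 300 us" and "about 0.0086 us/byte (116 MB/s)" quoted above.
```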
Step 5 (buffer pack/unpack) corresponds to the costs of assembling messages from potentially noncontiguous locations before transmission, and disassembling them upon reception. Our implementation performs this assembly and disassembly explicitly in all cases; optimized implementations might be able to avoid this extra copying for some distributions on some platforms. For large messages the pack/unpack steps execute at a rate of about 64 MB/s. As we would expect, this is about half the rate achieved for the Extrinsic Call step, which performs copying in the receiver but not the sender. The final cost component is the actual communication (the Data Transfer shaded region). Since our transfer strategy permits senders to perform their transfers to receivers in parallel, we expect that the execution time of intertask transfers is governed by \[ P t_s + \left( \frac{N}{P} \right) t_b, \] where \( t_s \) is the per-message startup cost, \( N \) is the amount of data in the array (in bytes), and \( t_b \) is the per-byte data transfer time. The experimental data fit this simple model reasonably well. A more detailed model and more extensive analysis appear in [18]. 5.3. Performance Summary For large arrays, HPF/MPI achieves a bandwidth of about 12 MB/s in the two nondirect cases, and up to about 17 MB/s in the Persistent/Direct case. The underlying MPICH library can transfer data at a maximum rate of about 30 MB/s on the SP. Hence HPF/MPI achieves roughly half the bandwidth available on this platform. The data transfer rate for large arrays during the Data Transfer step is about 25 MB/s per sender–receiver processor pair, which indicates that transfers are proceeding in parallel at close to the maximum rate. The degradation in overall bandwidth in HPF/MPI compared to MPICH is due chiefly to the extra copying in the extrinsic call and buffer pack/unpack steps. 
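The transfer-time model \( P t_s + (N/P) t_b \) given above also reproduces the qualitative trend noted in Section 5.1: total time grows with P for small arrays and shrinks with P for large ones. A quick numerical check (Python; the values of t_s and t_b are illustrative guesses, not measurements):

```python
def transfer_time(N, P, ts=50e-6, tb=1 / 30e6):
    """P*ts + (N/P)*tb: per-sender message startups plus parallel data transfer.
    ts (seconds) and tb (seconds/byte) are illustrative constants."""
    return P * ts + (N / P) * tb

small = [transfer_time(4 * 2**10, P) for P in (1, 2, 4, 8)]  # 4 KB array
large = [transfer_time(4 * 2**20, P) for P in (1, 2, 4, 8)]  # 4 MB array
```

With these constants the large-array times fall steadily as P grows (the bandwidth term dominates), while the small-array times are eventually dominated by the growing P t_s startup term.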
In summary, the microbenchmark results show that the persistent communication optimization provides significant benefits when transferring small arrays; that our HPF/MPI implementation achieves reasonable performance for small arrays when the persistent communication optimization is applied, and for large arrays in all cases; and that a considerable performance improvement is realized by directly manipulating arrays stored in HPF’s internal representation.

6. APPLICATIONS

We also studied the performance of HPF/MPI implementations of application kernel benchmarks, namely 2-D FFT, 2-D convolution, and multiblock codes, comparing each with an equivalent pure HPF program. In each case, we employ the persistent communication optimization when transferring data between tasks. Our results demonstrate that in most instances the HPF/MPI library achieves performance superior to that of pure HPF.

6.1. 2-D FFT

The HPF/MPI and HPF implementations are based on the codes given in Figs. 2 and 1, respectively. For our experiments, we replace the read call in the 2-D FFT with a statement that initializes array \( A \), and eliminate the write call entirely. The code was tuned for good cache performance with an experimentally determined blocking parameter. The HPF/MPI code is executed as a pipeline of two tasks, with an equal number of processors assigned to each task.

Figure 7 presents our results, which are performed for a number of images large enough to render pipeline startup and shutdown costs insignificant. The execution times shown are the average per image. The speedup obtained over a sequential version of the code is shown in Fig. 8. The performance of the HPF/MPI version is generally better. In particular, for a fixed image size, HPF/MPI provides an increasing improvement in speedup as \( P \) increases.

FIG. 9. Convolution algorithm structure. Two image streams are passed through forward FFTs and then to a pointwise matrix multiplication (MM) and inverse FFT.

6.2.
2-D Convolution

Convolution is a standard technique used to extract feature information from images [4, 23]. Images, represented as arrays of size \( N \times N \), are input in pairs on two streams, and convolution generates a single output stream of images of the same size. A single convolution operation involves transformation of the two input arrays using independent 2-D FFTs, a pointwise multiplication of the two transformed arrays, and the application of an inverse 2-D FFT on the resulting array to generate an output image (Fig. 9). A data-parallel convolution algorithm performs these steps in sequence for each pair of input images, while a pipelined algorithm can execute each rectangular block in Fig. 9 as a separate module. As in the 2-D FFT, this pipeline structure can improve performance by reducing the number of messages. Moreover, each module involves two 1-D FFTs, which are further pipelined as explained in the previous section.

The HPF/MPI code consists of six tasks (a column task and a row task for each of the three modules), each of size \( P/6 \), where \( P \) is the total number of processors available for each experiment. The values of \( P \) were chosen to provide 1, 2, or 4 processors per task for the HPF/MPI version. Figure 10 shows our results. The graph compares the average total elapsed time for performing 2-D convolution on one data set under HPF and under HPF/MPI. Once again, we see that the HPF/MPI version is often significantly faster than the pure HPF version. On the largest image size plotted (1024 \( \times \) 1024), HPF/MPI provides an improvement of up to 37% over pure HPF. A comparison of the speedups is shown in Fig. 11.

6.3. Multiblock

Multiblock codes decompose a complex geometry into multiple simpler blocks [26]. A solver is run within each block, and boundary data are exchanged between blocks periodically.
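In the HPF/MPI multiblock code evaluated below, processors are assigned to blocks roughly in proportion to block size. One way to compute such a mapping is sketched here (Python; this rule and its largest-remainder rounding are our own illustration, not the paper's actual procedure, which for some P assigns the small middle block even fewer processors):

```python
def allocate(block_sizes, P):
    """Assign P processors to blocks in proportion to block_sizes,
    giving every block at least one processor (largest-remainder rounding)."""
    total = sum(block_sizes)
    shares = [P * s / total for s in block_sizes]
    alloc = [max(1, int(x)) for x in shares]
    # hand out any remaining processors to the largest fractional remainders
    while sum(alloc) < P:
        i = max(range(len(shares)), key=lambda j: shares[j] - alloc[j])
        alloc[i] += 1
    return alloc

# End blocks of 512 x 512 and a middle block of 256 x 256, on 9 processors:
print(allocate([512 * 512, 256 * 256, 512 * 512], 9))  # → [4, 1, 4]
```

For P = 5 this rule yields the 2/1/2 mapping used in the experiments; for P = 9 it yields 4/1/4.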
For our experiments, we use a program that applies a simple Poisson solver within each block and that supports only simple geometries [10]. For ease in HPF implementation, we fixed the number of blocks to 3. We chose a geometry such that each block is square, but the middle block has one-fourth the area of the end blocks. For example, the largest geometry in our experiment has end blocks of size 512 \( \times \) 512 and a middle block of size 256 \( \times \) 256. We chose values of \( P \) such that fewer processors were assigned the smaller middle block under HPF/MPI. In particular, for \( P = 5 \), two processors work on the end blocks and one on the middle (a mapping of 2/1/2); for \( P = 9 \) the mapping is 4/1/4; and for \( P = 18 \) the mapping is 8/1/8. We compare the performance of an HPF program that computes each of the three blocks in turn and an HPF/MPI program in which three tasks compute the three blocks concurrently. In the HPF version, each block is represented as one array which is distributed over all the available processors. In the HPF/MPI code, each task executes one block, and processors are allocated to blocks in proportion to their size. The blocks were distributed in a \((*, \text{ BLOCK})\) fashion for both HPF and HPF/MPI codes. Figures 12 and 13 show our results. The HPF/MPI program is always faster than the pure HPF program. This application is more communication intensive than the other two applications. The superior performance of the HPF/MPI code is due to lower communication overhead and better scalability. 7. CONCLUSIONS An HPF binding for MPI can be used to construct task-parallel HPF applications and to couple separately compiled data-parallel programs, without a need for new compiler technology or language extensions. 
Our implementation of this binding executes efficiently on multicomputers, allowing us to write task/data-parallel 2-D FFT, convolution, and multiblock codes that execute faster than equivalent codes developed in HPF alone. On the basis of these results, we argue that the combination of the HPF and MPI standards provides a useful and economical approach to the implementation of task/data-parallel computations. Microbenchmark results reveal various overheads associated with the HPF/MPI library. The MPI persistent request facility can be used to trigger optimizations that avoid overheads associated with exchange of distribution information and the computation of communication schedules. Overheads associated with the HPF extrinsic interface can be avoided by providing direct access to the internal representation used for HPF arrays. It is a topic for future research to determine the extent to which performance can be improved further by a tighter coupling between HPF/MPI and pghpf, by refining the HPF extrinsic interface, and by using compiler-derived information to select specialized communication functions. The ideas developed in this paper can be extended in a number of ways. It appears likely that similar techniques can be used to support other task interaction mechanisms. MPI and HPF extensions also suggest directions for further work. For example, MPI extensions proposed by the MPI Forum support client-server structures, dynamic task management, and single-sided operations. These constructs could be incorporated into an HPF/MPI system to support, for example, attachment to I/O servers and asynchronous coupling. Similarly, proposed support for mapping constructs within HPF (task regions) would allow the creation of task-parallel structures within a single program, by using HPF/MPI calls to communicate between task regions. 
ACKNOWLEDGMENTS We are grateful to the Portland Group, Inc., for making their HPF compiler and runtime system available to us for this research, and to Shankar Ramaswamy and Prith Banerjee for allowing us to use their implementation of the FALLS algorithm. The multiblock Poisson solver is based on a code supplied by Scott Baden and Stephen Fink. We have enjoyed stimulating discussions on these topics with Chuck Koelbel and Rob Schreiber. This work was supported by the National Science Foundation’s Center for Research in Parallel Computation under Contract CCR-8809615. REFERENCES IAN FOSTER received his Ph.D. in computer science from Imperial College in 1988. He is currently a scientist in the Mathematics and Computer Science Division of Argonne National Laboratory, and associate professor of Computer Science at the University of Chicago. His research interests include languages, software tools, and applications of parallel computers, and the techniques required to integrate high-performance computers into networked environments. He recently served as software architect for the I-WAY distributed computing experiment. DAVID R. KOHR, JR. graduated in 1988 from Washington University in St. Louis with a B.S. in computer science. From 1988 to 1991 he was on the staff of the MIT Lincoln Laboratory, developing software for radar data acquisition and signal processing. From 1991 to 1994 he was a graduate student at the University of Illinois at Urbana-Champaign, where he investigated tools and techniques for performance analysis of parallel application and system software, and from which he earned an M.S. in computer science. Since 1994 Kohr has been with Argonne National Laboratory, where he performs research on parallel run-time library support for communication, multithreading, and input–output. RAKESH KRISHNAIYER is a Ph.D. candidate in computer science at Syracuse University. He received his M.S. 
in the same field from Syracuse University in 1996 and his B.Tech in computer science and engineering from the Indian Institute of Technology, Madras in 1993. He is currently pursuing his research in the Mathematics and Computer Science Division of Argonne National Laboratory. His research interests are in parallel and distributed computing, compilers, languages, and networks. He is a member of the IEEE Computer Society and the ACM. ALOK CHOUDHARY received his Ph.D. from University of Illinois, Urbana-Champaign, in electrical and computer engineering, in 1989, and his M.S. from University of Massachusetts, Amherst, in 1986. He has been an associate professor in the Electrical and Computer Engineering Department at Northwestern University since September 1996. Alok Choudhary received the National Science Foundation’s Young Investigator Award in 1993 (1993–1999). His main research interests are in high-performance computing and communication systems and their applications in many domains including multimedia systems, information processing, and scientific computing. Alok Choudhary served as the conference co-chair for the International Conference on Parallel Processing, and is currently the chair of the International Workshop on I/O Systems in Parallel and Distributed Systems. Received December 9, 1996; revised July 9, 1997; accepted July 15, 1997
FORMAL METHODS FOR LIFE-CRITICAL SOFTWARE Ricky W. Butler Sally C. Johnson NASA Langley Research Center Hampton, Virginia Abstract The use of computer software in life-critical applications, such as for civil air transports, demands the use of rigorous formal mathematical verification procedures. This paper demonstrates how to apply formal methods to the development and verification of software by leading the reader step-by-step through requirements analysis, design, implementation, and verification of an electronic phone book application. The current maturity and limitations of formal methods tools and techniques are then discussed, and a number of examples of the successful use of formal methods by industry are cited. Introduction From civil air transports to nuclear power plants, computer software is finding its way into more life-critical applications every year. This paper examines the available methods for avoiding/tolerating design faults in software and makes a case that fault-avoidance techniques such as formal methods are the only intellectually defensible means for producing life-critical software. The characteristics of formal methods and how they may be applied to software are then described and demonstrated on an example application. Finally, the maturity of formal methods for practical use by industry is examined and some limitations of formal methods are discussed. The validation of an ultra-reliable system must deal with two sources of error: 1. system failure due to physical component failure 2. system failure due to design errors There are well-known techniques for handling physical component failure—using redundancy and voting. The reliability assessment problem in the presence of physical faults is based upon Markov modeling techniques and is well understood. The design fault problem is a much greater threat. There are 3 basic approaches to dealing with the design fault problem. 1. Testing (Lots of it) 2. 
Design Diversity (i.e., Software Fault-Tolerance: N-Version Programming, Recovery Blocks, etc.) 3. Fault Avoidance (i.e., Formal Specification and Verification, Automatic Program Synthesis, Reusable Modules)

The problem with life testing is that in order to measure ultra-reliability one must test for exorbitant amounts of time. For example, to measure a $10^{-9}$ probability of failure for a 1-hour mission one must test for more than $10^9$ hours (i.e., 114,000 years). There are many who advocate the use of design diversity to overcome the limitations of testing. The basic idea is to use separate design/implementation teams to produce multiple versions from the same specification. Then, through the use of threshold voters rather than exact-match voters, one can mask the effect of a design error in one of the versions while tolerating minor variations in calculations between versions. The hope is that the design flaws will manifest errors independently or nearly so. By assuming independence one can obtain ultra-high estimates of reliability even though the individual versions have failure rates on the order of $10^{-4}/\text{hour}$. When one examines the case for tolerance of physical faults, one finds that the only criterion that enables quantification of ultra-reliability for hardware systems with respect to physical failure is the independence assumption. However, the independence assumption has been rejected at the 99% confidence level in several experiments for low-reliability software. Furthermore, the independence assumption cannot be validated for high-reliability software because of the exorbitant test times required. If one cannot assume independence, one must measure correlations. However, this is infeasible as well. To measure correlations between versions would require as much testing time as life-testing the system, because the correlations must lie in the ultra-reliable region in order for the system to be ultra-reliable. 
It is not possible, within feasible amounts of testing time, to establish that design diversity achieves ultra-reliability. Consequently, design diversity can create an "illusion" of ultra-reliability without actually providing it. Since we cannot quantify the reliability of ultra-reliable software, we must develop our systems in a manner that eliminates errors in the first place. In other words, we concentrate our efforts on producing a correct design and implementation rather than on the process of quantification. Our confidence in the software is derived from our rigorous analysis rather than by experimentation. The Characteristics of Formal Methods Central to formal methods is the use of mathematical logic. Mathematical logic serves the computer system designer in the same way that calculus serves the designer of continuous systems—as a notation for describing systems and as an analytical tool for calculating and predicting the behavior of systems. In both design domains, computers can provide speed and accuracy for the analysis. Formal methods involve the specification of a system using languages based on mathematical logic. Formal methods provide a means for rigorous specification of desired properties as well as implementation details. Mathematical proof may be used to establish that an implementation meets the desired abstract properties. The most rigorous application of formal methods is to use semi-automatic theorem provers to ensure the correctness of the proofs. In principle, formal methods can accomplish the equivalent of exhaustive testing, if applied all the way from requirements to implementation. However, this requires a complete verification, which is rarely done in practice. The reason that correct software is difficult to produce, even with large amounts of testing, is at first surprising—after all, we have been designing complex engineering systems for decades. 
Table 1 compares computer systems with classical systems and illustrates why the traditional approach to validation is ineffective. Unlike physical systems that are subject to physical failure, in software there is nothing to go wrong but the design. Our intuition and experience is with continuous systems—but software exhibits discontinuous behavior. We are forced to separately reason about or test millions of sequences of discrete state transitions. Most of the design complexity in modern systems is in the software. The problem is that the complexity exceeds our ability to have intellectual control over it. The term formal in "formal methods" refers to the idea that a proof can be known to be valid based upon its "form." In other words, the validity of a proof can be established by examining the syntax of an argument without regard to its semantics. The following argument:

That animal is a cat
All cats are sneaky
Therefore, that animal is sneaky

is valid independent of the meaning of "animal", "cat" or "sneaky." Thus, the following equivalent argument is also valid:

That ⊗ is a □
All □s are ◇
Therefore, that ⊗ is a ◇

Since the validity of a formal proof depends upon form only, a computer program can be used to check the validity of a proof without being supplied detailed domain-specific knowledge. Formal logic provides rules for constructing arguments that are sound because of their form and independent of their meaning. Formal logic provides rules for manipulating formulas in such a manner that only valid conclusions are deducible from premises. The manipulations are called a proof. If the premises are true statements about the world, then the soundness theorems of logic guarantee that the conclusion is also a true statement about the world. Assumptions about the world are made explicit, and are separated from rules of deduction. Logic provides the foundation for all mathematics. 
But traditional applications of mathematics have been to continuous systems, where highly developed bodies of theory (e.g., aerodynamics) remove practitioners from having to reason from the elementary logical underpinnings. But computer systems operate in a discrete domain; their operation is essentially a sequence of decisions, and each application is new. Therefore we must develop a specific theory about each one, directly in logic. Formal methods can be roughly divided into two basic components: specification and verification. Formal specification is the use of notations derived from formal logic to describe (1) the assumptions about the world in which a system will operate, (2) the requirements that the system is to achieve, and (3) a design to accomplish those requirements. Formal verification is the use of proof methods from formal logic to (1) analyze specifications for certain forms of consistency and completeness, (2) prove that the design will satisfy the requirements, given the assumptions, and (3) prove that a more detailed design implements a more abstract one. The mathematics of formal methods includes (1) predicate calculus (first-order logic), (2) recursive function theory, (3) lambda calculus, (4) programming language semantics, and (5) discrete mathematics—number theory, abstract algebra, etc. The following is a useful (first-order) taxonomy of the degrees of rigor in formal methods:

*Level-1:* Formal specification of all or part of the system.
*Level-2:* Paper-and-pencil proof of correctness.
*Level-3:* Formal proof checked by mechanical theorem prover.

Level 1 represents the use of mathematical logic or a specification language that has a formal semantics to specify the system. This can be done at several levels of abstraction. For example, one level might enumerate the required abstract properties of the system, while another level describes an implementation, which is algorithmic in style. 
Level 2 goes beyond Level 1 through the use of pencil-and-paper proofs that the more concrete levels logically imply the more abstract levels. Level 3 is the most rigorous application of formal methods. Here one uses a semi-automatic theorem prover to ensure that all of the proofs are valid. The Level 3 process of convincing a mechanical prover is actually a process of developing an argument for an ultimate skeptic who must be shown every detail. One can also add a Level 0 to refer to software engineering techniques that do not involve mathematical logic in a significant way, such as statically testing for uninitialized variables and V&V activities such as formal inspections. Intuitively, higher levels of rigor provide greater confidence but at greater cost. It is also important to realize that formal methods is not an all-or-nothing approach. The application of formal methods to the most critical portions of a system is a pragmatic and useful strategy. Although a complete formal verification of a large complex system is impractical at this time, a great increase in confidence in the system can be obtained by the use of formal methods at key locations in the system. ## Formal Requirements Analysis In this section we will explore the process of writing a Level 1 formal specification of requirements. This will be done by way of example. Suppose we want to develop an electronic telephone book, and we wish to write down the requirements for it using formal methods. We begin with some informal English requirements: - The phone book shall store the phone numbers of a city - There shall be a way to retrieve a phone number given a name - It shall be possible to add and delete entries from the phone book ### Mathematical Representation of a Phone Book The first question that we face is: how do we represent the phone book mathematically? There appear to be several possibilities: 1. As a set of ordered pairs of names and numbers. 
Adding and deleting entries via set addition and deletion. 2. As a function whose domain is all possible names and whose range is all phone numbers. Adding and deleting entries via modification of function values. 3. As a function whose domain is only names currently in the phone book and whose range is phone numbers. Adding and deleting entries via modification of the function domain and values. (\textit{Z} style) We decide to go with the second approach because it seems the simplest. In traditional mathematical notation, we would define the phone book as follows: \[ \begin{align*} \text{Let } N &= \text{ set of names} \\ \text{Let } P &= \text{ set of phone numbers} \\ \text{book} : N &\rightarrow P \end{align*} \] The set \( N \) represents all possible names, not just those in the city. Similarly, the set \( P \) represents all possible phone numbers, not just those currently in service. How then do we indicate that we do not have a phone number for all possible names, only for names of real people? One possibility is to use a special number that could never really occur in real life, e.g., 000-0000. We don’t have to specify the implemented value of this special number; we can just give it a name: \( p_0 \in P \). Now we can define an empty phone book. In traditional notation, we would write: \[ \begin{align*} \text{emptybook} : N &\rightarrow P \\ \text{emptybook}(\text{name}) &\equiv p_0 \end{align*} \] Now we need to figure out how to represent English requirement 2: “There shall be a way to retrieve a phone number given a name.” We decide to use a function “FindPhone.” \[ \begin{align*} \text{FindPhone} : B \times N &\rightarrow P \\ \text{FindPhone}(bk, \text{name}) &= bk(\text{name}) \end{align*} \] where \( B = \text{ set of functions } : N \rightarrow P \). \text{FindPhone} returns a phone number when given a book and a name. Note that \text{FindPhone} is a higher-order function since its first argument is a function (i.e., its type is \( B \)). 
English requirement 3 stated, “It shall be possible to add and delete entries from the phone book.” We decide to model these activities with two functions “AddPhone” and “DelPhone”: \[ \begin{align*} \text{AddPhone} : B \times N \times P &\rightarrow B \\ \text{AddPhone}(bk, \text{name}, \text{num})(x) &= \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ \text{num} & \text{if } x = \text{name} \end{cases} \\ \text{DelPhone} : B \times N &\rightarrow B \\ \text{DelPhone}(bk, \text{name})(x) &= \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ p_0 & \text{if } x = \text{name} \end{cases} \end{align*} \] The specification for \text{AddPhone} reads as follows: if you add an entry to the phone book for \text{name} and then access entry \( x \), you get the original value \( bk(x) \) if \( x \neq \text{name} \) and \( \text{num} \) otherwise. Similarly, \text{DelPhone} states: if you delete the \text{name} entry from the phone book and then access \( x \), you get \( p_0 \) if \( x = \text{name} \) and \( bk(x) \) otherwise. 
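The functions above can be animated directly in code. The following sketch is our own addition, not part of the paper: it models a book as a Python function from names to numbers, with `P0` standing for the special number \( p_0 \):

```python
P0 = "000-0000"  # the special number p_0: "no phone number for this name"

def emptybook(name):
    return P0

def FindPhone(bk, name):
    return bk(name)

def AddPhone(bk, name, num):
    # New book: identical to bk except that it maps `name` to `num`.
    return lambda x: num if x == name else bk(x)

def DelPhone(bk, name):
    # New book: identical to bk except that `name` maps back to p_0.
    return lambda x: P0 if x == name else bk(x)

bk = AddPhone(emptybook, "alice", "555-1234")
print(FindPhone(bk, "alice"))                     # 555-1234
print(FindPhone(bk, "bob"))                       # 000-0000
print(FindPhone(DelPhone(bk, "alice"), "alice"))  # 000-0000
```

Note how closely the lambdas mirror the case definitions: each update builds a new (higher-order) function rather than mutating state.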
We can now write the complete specification: Let \( N = \text{ set of names} \) \[ \begin{align*} P &= \text{ set of phone numbers} \\ \text{book} : N &\rightarrow P \\ p_0 &\in P \\ B &= \text{ set of functions } : N \rightarrow P \\ \text{FindPhone} : B \times N &\rightarrow P \\ \text{FindPhone}(bk, \text{name}) &= bk(\text{name}) \\ \text{AddPhone} : B \times N \times P &\rightarrow B \\ \text{AddPhone}(bk, \text{name}, \text{num})(x) &= \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ \text{num} & \text{if } x = \text{name} \end{cases} \\ \text{DelPhone} : B \times N &\rightarrow B \\ \text{DelPhone}(bk, \text{name})(x) &= \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ p_0 & \text{if } x = \text{name} \end{cases} \end{align*} \] At this point we realize that our work is not completely satisfactory, for example: - Our specification does not rule out the possibility of someone having a “\( p_0 \)” phone number - We have not allowed multiple phone numbers per name The first deficiency is an artifact of our particular specification; however, the second is a result of a deficiency in the English specification. **Overcoming the Deficiencies** The first deficiency is that our requirements do not rule out the possibility of someone having a “\( p_0 \)” phone number. One way to overcome this problem is to use a flag to indicate when a phone number is valid. However, this would not help us at all with the second deficiency — no way to store multiple phone numbers per name. The most straightforward solution to the second deficiency is to make the phone book map into a set of phone numbers rather than just a single phone number. This also solves deficiency 1—the empty set can be used to represent the situation where there is no phone number instead of using a \( p_0 \) number. 
Thus, we have: \[ \begin{align*} \text{Let } N &= \text{ set of names} \\ P &= \text{ set of phone numbers} \\ \text{book} : N &\rightarrow 2^P \\ \text{emptybook}(\text{name}) &\equiv \emptyset \end{align*} \] The notation \( 2^P \) represents the set of subsets of \( P \). Thus, a book is a function from the set of names into the set of subsets of phone numbers (i.e., given a name it will return a set of phone numbers). The empty set \( \emptyset \) can be used to represent the lack of a phone number for a name. The revised specification of the phone book becomes:

Let $N$ = set of names
$P$ = set of phone numbers
$\text{book} : N \rightarrow 2^P$
$B$ = set of functions : $N \rightarrow 2^P$
$\text{emptybook}(\text{name}) \equiv \emptyset$
$\text{FindPhone} : B \times N \rightarrow 2^P$
$\text{FindPhone}(\text{bk}, \text{name}) = \text{bk}(\text{name})$
$\text{AddPhone} : B \times N \times P \rightarrow B$
$\text{AddPhone}(\text{bk}, \text{name}, \text{num})(x) =$
$$ \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ \text{bk}(\text{name}) \cup \{\text{num}\} & \text{if } x = \text{name} \end{cases} $$
$\text{DelPhone} : B \times N \rightarrow B$
$\text{DelPhone}(\text{bk}, \text{name})(x) =$
$$ \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ \emptyset & \text{if } x = \text{name} \end{cases} $$

Notice that the function *DelPhone* deletes all of the phone numbers associated with a name. Should the system be able to remove just one phone number associated with the name? The English requirements as written do not cover this situation. Clearly, the requirements must be corrected. 
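The set-valued specification is just as easy to animate. In this sketch (our own addition, not from the paper), a book maps each name to a `frozenset` of numbers, so multiple numbers per name and the "no number" case (the empty set) both fall out naturally:

```python
def emptybook(name):
    return frozenset()

def FindPhone(bk, name):
    return bk(name)

def AddPhone(bk, name, num):
    # bk(name) gains `num`; all other names are unchanged.
    return lambda x: (bk(name) | {num}) if x == name else bk(x)

def DelPhone(bk, name):
    # All numbers for `name` are removed at once.
    return lambda x: frozenset() if x == name else bk(x)

bk = AddPhone(AddPhone(emptybook, "alice", "555-1234"), "alice", "555-9999")
print(sorted(FindPhone(bk, "alice")))             # both numbers stored
print(FindPhone(DelPhone(bk, "alice"), "alice"))  # frozenset()
```

Note the parentheses around `bk(name) | {num}`: without them the conditional expression would bind to `{num}` alone, a subtle precedence bug.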
If the facility to remove one of the phone numbers out of the set is needed, an additional function, say *DelPhoneNum*, must be defined: $\text{DelPhoneNum} : B \times N \times P \rightarrow B$ $\text{DelPhoneNum}(\text{bk}, \text{name}, \text{num})(x) =$ $$ \begin{cases} \text{bk}(x) & \text{if } x \neq \text{name} \\ \text{bk}(\text{name}) \setminus \{\text{num}\} & \text{if } x = \text{name} \end{cases} $$ Several aspects of the formal specification are significant. First, the specification is abstract and does not resemble program code. For example, the functions are defined over infinite domains. Second, the process of translating the requirements into mathematics has forced us to enumerate many things that are usually left out of English specifications. Third, the formal process exposes ambiguities and deficiencies in the requirements. For example, one must choose between $\text{book} : N \rightarrow P$ and $\text{book} : N \rightarrow 2^P$ as the definition of the phone book. **Formal Analysis of Requirements** Although formal analysis can be carried out using pencil and paper, greater confidence in the analysis can be gained through use of a semi-automatic theorem prover, i.e., using Level 3 rigor. In order to use a theorem prover, the specification must be translated into the formal specification language used by the theorem prover. 
We will illustrate this process using the PVS (Prototype Verification System) theorem prover.\cite{6,7,8} The specification becomes:

```
names: TYPE
ph_number: TYPE
IMPORTING sets[ph_number]
book: TYPE = [names -> set[ph_number]]
name: VAR names
num: VAR ph_number
bk: VAR book
emptybook(name): set[ph_number] = emptyset
FindPhone(bk, name): set[ph_number] = bk(name)
AddPhone(bk, name, num): book = bk WITH [(name) := add(num, bk(name))]
DelPhone(bk, name): book = bk WITH [(name) := emptyset]
```

A few observations should make the PVS syntax understandable. The first two lines define the types *names* and *ph_number*. These represent the domains of names and phone numbers, respectively. The IMPORTING command makes the PVS sets library available to the specification. The notation `[names -> set[ph_number]]` defines a function type whose domain is *names* and whose range is `set[ph_number]`. The notation `bk WITH [(name) := add(num, bk(name))]` defines a new function identical to *bk* except at the point *name*. The value of the new function at *name* is set equal to `add(num, bk(name))`, the original set *bk*(*name*) with *num* added to it. 
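The `WITH` override is the only non-obvious construct here; its behavior can be mimicked in a few lines of ordinary code. This is a sketch under our own naming (`with_override` is not PVS), added for illustration:

```python
def with_override(f, point, value):
    """Analogue of PVS `f WITH [(point) := value]`: a function identical
    to f everywhere except that it maps `point` to `value`."""
    return lambda x: value if x == point else f(x)

def add(elem, s):
    """Analogue of the sets-library `add`: the set s with elem inserted."""
    return frozenset(s) | {elem}

emptyset = frozenset()
bk = lambda name: emptyset  # emptybook
# AddPhone(bk, name, num) = bk WITH [(name) := add(num, bk(name))]
bk2 = with_override(bk, "alice", add("555-1234", bk("alice")))
print(bk2("alice"), bk2("bob"))
```

The override is non-destructive: `bk` itself is unchanged, just as a PVS `WITH` expression denotes a new function rather than an assignment.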
We can now analyze our requirements by posing challenges: “If this specification is correct, the following property should be true.” For example, if I add a phone number to a name, then the set returned by *FindPhone* should contain that number:

$\text{num} \in \text{FindPhone}(\text{AddPhone}(\text{bk}, \text{name}, \text{num}), \text{name})$

In PVS notation, we have

```
Find_Add_lem: LEMMA
  member(num, FindPhone(AddPhone(bk, name, num), name))
```

We issue the PVS prove command followed by a high-level strategy that is often able to automatically prove simple theorems. The system responds:

```
Rewriting AddPhone(bk, name, num) to ...
Rewriting FindPhone ...
Rewriting member(num, add(num, bk(name))) to TRUE.
Trying repeated skolemization, instantiation, and if-lifting,
Q.E.D.
Run time  = 3.80 secs.
Real time = 10.48 secs.
```

The PVS prover displays Q.E.D., which informs us that the theorem has been successfully proved. We have verified that our definition of FindPhone satisfies our expectation\(^1\). Encouraged by our success, we try another:

```
Del_Add_lem: LEMMA
  DelPhone(AddPhone(bk, name, num), name) = bk
```

This time our PVS proof effort leaves us with:

```
Del_Add_lem.1 :
[-1]  name!1 = x!1
[1]   emptyset = bk!1(x!1)
Rule?
```

This is not provable because \(bk!1(x!1)\) (which is equal to \(bk!1(\text{name!1})\)) is not necessarily equal to the empty set. We realize that after \text{DelPhone} removes \text{name} from the phone book, \(bk(\text{name})\) will be equal to the empty set only in the case that there were no phone numbers for \text{name} before the \text{AddPhone} function operated on the phone book. 
Thus, we must change the lemma to:

```
Del_Add_lem: LEMMA empty?(bk(name)) IMPLIES
  DelPhone(AddPhone(bk, name, num), name) = bk
```

At this point we have gained some additional insight into our requirements. Several questions arise that should be addressed in more detail in our requirements:

- Should we add a "ChangePhone" function that alters the phone numbers for an already existing name?
- Should we change the definition of AddPhone to only operate on non-existing names?
- Should error messages be output from the functions?

We will not pursue these questions further in this paper, but have raised them to illustrate how the putative theorem proving process can lead to a closer investigation of the requirements. \(^1\)Of course this is merely one of many properties we may wish to verify. Revising the Informal English Requirements One important product of the formal specification process is that it enables us to revise our English specification in a way that removes ambiguities. The original specification was: - The phone book shall store the phone numbers of a city - There shall be a way to retrieve a phone number given a name - It shall be possible to add and delete entries from the phone book We now revise them to read: - For each name in the city, a set of phone numbers shall be stored (Should we limit the number?) - There shall be a way to retrieve the phone numbers given a name - It shall be possible to add a new name and phone number - It shall be possible to add new phone numbers to an existing name - It shall be possible to delete a name - It shall be possible to delete one of several phone numbers associated with a name - The user shall be warned if a deletion is requested on a name not in the city - The user shall be warned if a deletion of a non-existent phone number is requested There are many different ways to formally specify something. 
No matter what representation you choose, you are making some decisions that bias the implementation. The goal is to minimize this bias and yet be complete. The process of formalizing the requirements can reveal problems and deficiencies and lead to a better English requirements document as well. Design Verification In this section we will briefly explore the techniques of design verification. This will be done by continuing with our phone book example. We decide to design our phone book using a hash table. For simplicity we assume that we have a hash function that will return a unique index into a multi-dimensional array for each name in the phone book. This is illustrated in Figure 1.

Fig. 1. Data Structure for Phone Book High-Level Design

The high-level design of the phone book can be specified in PVS as follows:

```
index:   TYPE = {i: nat | i < max_names}
nbufidx: TYPE = {i: nat | i <= max_numbers}
nbufloc: TYPE = {i: posnat | i <= max_numbers}
numbuf:  TYPE = ARRAY[nbufloc -> ph_number]
hashf:   TYPE = function[names -> index]
numlist: TYPE = [# last: nbufidx, nbuf: numbuf #]
ibook:   TYPE = ARRAY[index -> numlist]
hash: hashf
ibk: VAR ibook
name: VAR names
findphone(ibk, name): numlist = ibk(hash(name))
```

Similarly, the high-level design for the "addphone" and "delphone" functions can be defined:

```
addphone(ibk, name, num): ibook =
  IF last(ibk(hash(name))) >= max_numbers
  THEN ibk                % book is full
  ELSE                    % book is not full
    LET nl = ibk(hash(name)) IN
      ibk WITH [(hash(name)) :=
        (# nbuf := nbuf(nl) WITH [(last(nl) + 1) := num],
           last := last(nl) + 1 #)]
  ENDIF

delphone(ibk, name): ibook =
  LET nl = ibk(hash(name)) IN
    ibk WITH [(hash(name)) := nl WITH [last := 0]]
```

To show that the high-level design satisfies the requirements, we prove homomorphisms of the form:

```
Verif_condition: THEOREM
  NOT name_full(ibk, name) IMPLIES
    bmap(addphone(ibk, name, num)) = AddPhone(bmap(ibk), name, num)
```

In other words, if we start with a phone book ibk, add name to it, and then map 
it up to the requirements level with bmap, we obtain the same result as first mapping ibk up to the requirements level and then executing AddPhone. This is illustrated in Figure 2. **Introduction to Code-Level Verification** Presentation of the entire code-level specification, implementation and the corresponding formal verification is beyond the scope of this conference paper. However, some of the concepts involved can be introduced by way of a single procedure that could be used in the implementation of this phone book—an array search function. Let’s begin with an English specification of such a procedure: ``` The procedure searches an array “A” of length “N” for a value “X.” If it finds the element, then “Y” is equal to the index of the array element that is equal to “X” on exit from the procedure. If there is no element of the array equal to “X” then Y is equal to “0” on exit. ``` The following is a formal specification of this procedure: ``` pre-condition: N > 0 post-condition: (X = A[Y] ∧ (1 ≤ Y ≤ N)) ∨ ((Y = 0) ∧ (∀k: (1 ≤ k ≤ N) ⟹ A[k] ≠ X)) ``` The “pre-condition” describes what must be true of the input variables when the procedure is called, and the “post-condition” states a property of the output variables that defines the behavior of the routine. The symbol \( \wedge \) represents logical “and”, \( \vee \) represents logical “or”, and \( \forall \) is read “for all”. This specification could be implemented with a variety of different search techniques, e.g., linear search, binary search, etc. 
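Such a pre/post-condition pair can also be checked dynamically on concrete inputs. The sketch below is our own illustration (in Python rather than the Pascal used later in the paper): a wrapper asserts the pre-condition before calling a search routine and the post-condition after it, with a simple linear search shown as one admissible implementation.

```python
def check_search_contract(search, A, X):
    """Run `search` on (A, X) and assert the formal pre/post-conditions.
    Indices are 1-based, as in the specification; A is stored 0-based."""
    N = len(A)
    assert N > 0  # pre-condition: N > 0
    Y = search(A, X)
    # post-condition:
    #   (X = A[Y] and 1 <= Y <= N)  or  (Y = 0 and no element equals X)
    assert (1 <= Y <= N and A[Y - 1] == X) or \
           (Y == 0 and all(A[k] != X for k in range(N)))
    return Y

def linear_search(A, X):
    """One admissible implementation: the 1-based index of X, or 0."""
    for i, a in enumerate(A, start=1):
        if a == X:
            return i
    return 0
```

Any other implementation satisfying the same post-condition (e.g., a binary search on sorted input) could be substituted for `linear_search` without changing the contract check.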
For simplicity a binary search algorithm is presented here: ``` function Lookup(var A: array[1..N] of integer; x: integer): 1..N; var i, m, n : 1..N; label 11; {1 < N ∧ sorted(A) ∧ A[1] ≤ x < A[N]} begin m := 1; n := N; {m < n ∧ sorted(A) ∧ A[m] ≤ x < A[n]} while m + 1 < n do begin i := (m + n) div 2; if x < A[i] then n := i else if A[i] < x then m := i else begin Lookup := i; {A[Lookup] = x} goto 11 end end; {m + 1 = n ∧ sorted(A) ∧ A[m] ≤ x < A[n]} if A[m] ≠ x then begin {¬(∃k: (1 ≤ k ≤ N) ∧ (A[k] = x))} goto 11 end else begin Lookup := m; {A[Lookup] = x} end; 11: end {(A[Lookup] = x) ∨ ¬(∃k: (1 ≤ k ≤ N) ∧ (A[k] = x))} where sorted(A) = ∀i, j: (1 ≤ i < j ≤ N) ⟹ (A[i] < A[j]) ``` Note that the pre- and post-conditions have been added to the text as comments. In addition, a “loop invariant” has been supplied for each loop. This is a property that is true each time control reaches that point in the loop. Once these annotations are made, a set of “verification conditions” can be generated automatically using a tool such as Penelope.\(^{5,10}\) If these verification conditions (VCs) can be shown to be theorems, then the program correctly implements the specification.\(^4\) For this program and specification, the following verification conditions would be produced (⊃ denotes implication): 1. \( \{1 < N \wedge \text{sorted}(A) \wedge A[1] ≤ x < A[N]\} ⊃ \{1 < N \wedge \text{sorted}(A) \wedge A[1] ≤ x < A[N]\} \) 2. \( \{A[\text{Lookup}] = x\} ⊃ \{A[\text{Lookup}] = x\} \) 3. \( \{m + 1 = n \wedge \text{sorted}(A) \wedge A[m] ≤ x < A[n] \wedge A[m] = x\} ⊃ \{A[m] = x\} \) 4. \( \{\text{Failure}\} ⊃ \{\text{Failure}\} \) 5. \( \{m + 1 = n \wedge \text{sorted}(A) \wedge A[m] ≤ x < A[n] \wedge A[m] ≠ x\} ⊃ \{¬(\exists k: (1 ≤ k ≤ N) \wedge (A[k] = x))\} \) 6. 
\( \{m < n \wedge \text{sorted}(A) \wedge A[m] ≤ x < A[n] \wedge (m + 1 < n) \wedge (x < A[(m + n)\ \text{div}\ 2])\} ⊃ \{m < (m + n)\ \text{div}\ 2 \wedge \text{sorted}(A) \wedge A[m] ≤ x < A[(m + n)\ \text{div}\ 2]\} \) 7. \( \{m < n \wedge \text{sorted}(A) \wedge A[m] ≤ x < A[n] \wedge (m + 1 < n) \wedge (A[(m + n)\ \text{div}\ 2] < x)\} ⊃ \{(m + n)\ \text{div}\ 2 < n \wedge \text{sorted}(A) \wedge A[(m + n)\ \text{div}\ 2] ≤ x < A[n]\} \) 8. \( \{m < n \wedge \text{sorted}(A) \wedge A[m] ≤ x < A[n] \wedge (m + 1 < n) \wedge (A[(m + n)\ \text{div}\ 2] ≤ x) \wedge (x ≤ A[(m + n)\ \text{div}\ 2])\} ⊃ \{A[(m + n)\ \text{div}\ 2] = x\} \) The overall process is illustrated in Figure 3. **Fig. 3.** The VC generation process One can see that the specification above is very detailed and deals specifically with implementation variables. In fact, code-level verification is usually the most time-consuming of all of the formal methods, because of the amount of detail that must be handled. The formal specification that drives the VC generation process can be connected to the upper-level design specifications to form a formal hierarchy, as shown in Figure 4. The upper-level proofs are accomplished using the techniques of design proof described in the previous section. \(^5\)Of course this is true in practice only if the semantics of the language used for the VC generation match the actual semantics of the language employed and there are no bugs in the VC generator and compiler. **Fig. 4.** Hierarchical Specification Used to Prove High-Level Property (requirements, high-level design, detailed design, and program levels, linked by proofs, with VCs generated at the code level) **Maturity of Formal Methods** The major drawback cited by critics is that formal methods are too expensive and time-consuming to be practically applied. While this criticism was perhaps true twenty years ago, much progress has been made in the development of formal methods languages, tools, and techniques. Most of the commercial application of formal methods has occurred in Europe. 
Most noteworthy is the IBM CICS Project.\(^\text{11}\) This project applied formal methods to an upgrade of a major on-line transaction-processing software package. The size of the upgrade was 13,230 lines of code. The project team claimed that 19 defects were avoided as a result of using formal methods. They also claimed a cost savings of 9% of total, or $13 million saved. They used the Z specification language at the Level 1 level of rigor. Another noteworthy application of formal methods is the Inmos/Oxford T800 Transputer Floating-Point Unit Project. This project involved the application of formal methods to the design of a hardware device. The T800 Transputer Floating-Point Unit Project originally began with two separate, parallel developments: an informal development, supported by months of testing against other FPUs, and a formal development using Z. Because the formal development moved far ahead of the informal team, the informal effort was terminated. Inmos claims a saving of 12 months in the development time. They received the Queen’s Award for Technological Achievement in 1990. Another successful application of formal methods is the SACEM Railroad Signalling System.\(^\text{12}\) The objective of this project was to increase traffic movement by 25% (800,000 passengers/day). This involved 21,000 lines of Modula-2 code, of which 63% was safety-critical. They used Level 2 rigor, performing manual proofs on the VCs. The validation effort for the total system was 100 man-years. The development team believes that the system is safer as a result of the use of formal methods. Meanwhile in the United States, the National Security Agency and the Defense Advanced Research Projects Agency (DARPA) have quietly funded quite a lot of formal methods research, resulting in significant advances in theorem-proving tools (e.g., Gypsy, EHDM, SDVS) and in the complexity of systems that can be formally verified (e.g., encryption devices, secure operating systems, microprocessors). 
NASA Langley Research Center has established a research program aimed at bringing formal methods technology to a sufficiently mature level for practical use on life-critical systems by United States aerospace and other industries and to facilitate the transfer of this technology through carefully orchestrated demonstration projects. Our research efforts are primarily concentrated on the technically challenging areas of digital flight-control systems design that are currently beyond the state of the art. Demonstration projects are focussed on problem domains where current formal methods technologies are deemed adequate but techniques and examples of how to apply them are absent. To overcome the sizeable “learning curve” associated with adoption of formal methods and their application to new problem domains, these demonstration projects are accomplished by establishing cooperative partnerships between industry and the developers of the formal methods tools and techniques. Our software demonstration projects began with formal verification of some simple utility routines obtained from the NASA Goddard Space Flight Center and the NASA Lewis Research Center. This work was performed by Odyssey Research Associates (ORA) using their Ada verification tool named Penelope.\(^\text{13}\) During this project, ORA demonstrated that the use of formal specification alone uncovered several errors in the routines and that the subsequent formal verification effort uncovered additional errors.\(^\text{10}\) In a second project, ORA formally specified the mode-control panel logic of a Boeing 737 experimental research aircraft using Larch (the specification language used by Penelope).\(^\text{14}\) We are participating with NASA Johnson Space Center and the Jet Propulsion Laboratory (JPL) to demonstrate the use of formal methods for space applications. 
In this project, we are working with space application experts from NASA Johnson, JPL, and IBM to • educate the application experts about the PVS prover and how to apply formal methods, • work jointly to develop a hierarchical set of formal specifications of the Jet-Select function of the NASA Space Shuttle, ranging from pseudo-code level to detailed-design level to abstract high-level specification, • demonstrate how to prove that each level of specification is a valid implementation of the level above, and • demonstrate how to prove that the requirements-level specification meets a set of properties that the system is required to satisfy. Other demonstration projects related to software include: • formal specification and verification of floating point software for calculating trajectories of a ballistic missile; • formal specification of guidance and control system software for a planetary lander; • design, specification and verification of an operating system for a fault-tolerant, Reliable Computing Platform; and • development of a formal requirements definition language for flight-control software. This work, along with the rest of NASA Langley’s research in formal methods, is discussed in an overview paper presented at COMPASS ’91.\(^{15}\) Since the Federal Aviation Administration (FAA) must approve any new methodologies for developing life-critical digital systems for civil air transports, their acceptance of formal methods is a necessary precursor to their adoption by industry system designers. Therefore, we have been working with the FAA and other regulatory agencies to incorporate credit for formal methods into the standards they set. We presented a tutorial to the FAA SWAT (SoftWare Advisory Team) at their request, and SRI International is currently writing a chapter for the FAA Digital Systems Validation Handbook on formal methods. We were instrumental in including formal methods as an alternate means of compliance in the DO-178B standard. 
**Limitations** It is important that the limitations of formal methods be recognized. For many reasons, formal methods do not provide an absolute guarantee of perfection, even if applied with Level 3 rigor. First, formal methods cannot guarantee that the top-level specification is what was intended. Second, formal methods cannot guarantee that the mathematical model of a physical device such as a hardware gate is accurate with respect to the physics of the device. The formal verification depends upon the validity of the models of the primitive elements such as hardware gates. The mathematical model of a gate is merely a representation of the physical device. Some formal models include just logical properties. Other formal models include timing delays, but formal models typically do not include effects of temperature, EMI, manufacturing flaws, etc. Third, often the formal verification process is only applied to part of the system. Finally, there may be errors in the formal verification tools themselves. Nevertheless, formal methods provide a significant capability for discovering/removing errors in large portions of the design space. **Concluding Remarks** This tutorial-style paper describes in simple terms what formal methods are and how they can be applied to software. We believe that formal methods tools and techniques are already sufficiently mature to be practical and cost-effective in the development and analysis of life-critical software systems. Several examples of formally specified and verified systems support our position. The intellectual investment required to adopt formal methods is considerable. However, we see no acceptable alternative; the use of computer software in life-critical applications demands the use of rigorous formal specification and verification procedures. **References** [4] Knight, John, C.; and Leveson, Nancy, G.: A Reply To the Criticisms Of The Knight & Leveson Exper-
Structural Embeddings: Mechanization with Method César Muñoz, ICASE, Hampton, Virginia John Rushby, SRI International, Menlo Park, California Institute for Computer Applications in Science and Engineering NASA Langley Research Center, Hampton, VA Operated by Universities Space Research Association July 1999 Abstract. The most powerful tools for analysis of formal specifications are general-purpose theorem provers and model checkers, but these tools provide scant methodological support. Conversely, those approaches that do provide a well-developed method generally have less powerful automation. It is natural, therefore, to try to combine the better-developed methods with the more powerful general-purpose tools. An obstacle is that the methods and the tools often employ very different logics. We argue that methods are separable from their logics and are largely concerned with the structure and organization of specifications. We propose a technique called structural embedding that allows the structural elements of a method to be supported by a general-purpose tool, while substituting the logic of the tool for that of the method. We have found this technique quite effective and we provide some examples of its application. We also suggest how general-purpose systems could be restructured to support this activity better. Key words. semantic embeddings, formal notations, general verification systems, specification languages Subject classification. Computer Science 1. Introduction. In recent years, the capabilities of theorem provers oriented towards support of formal methods (we call them verification systems) have increased enormously. Systems such as ACL2 [24], Coq [5], Eves [42], HOL [14], Isabelle [36], and PVS [31] each come with a very rich specification language and a battery of decision procedures and proof strategies highly tuned to their logic. 
Some also provide convenient access to model checkers or to specialized decision procedures through built-in embeddings and interpretations, and some are able to generate efficiently executable code. This integration of rich specification languages with powerful automation allows general-purpose verification systems to attack very complex problems in a broad spectrum of domains [40]. A commonly cited drawback to the use of these systems is their lack of methodological support for the global process of specification and software development: with their emphasis on deductive support, the overall structure of a development is relegated to an external (informal or formal) methodology with little automated support. For this reason, some people complain that there is little method in formal methods. On the other hand, formal notations such as B [1], VDM [23], Z [44], and the requirements methodologies that employ tabular specifications [20, 26, 43] emphasize the methodological aspects of software specification and development. That is to say, they suggest how specifications should be structured and organized, how different specifications should be related to each other and to executable programs, and what theorems (i.e., "proof obligations") should be posed and proved in order to gain confidence in a specification or in the correctness of a refinement. These methods provide a formal notation and sometimes provide automated support for their methodological aspects, but usually their logic is supported only by relatively limited and specialized theorem provers, so that it can be tedious to discharge proof obligations, and difficult to establish properties of the overall specification. It is natural to ask whether the complementary strengths of general-purpose verification systems and of the more methodical formal notations can be combined in some way. One way to do this is by a semantic embedding of the formal notation within the logic of the verification system. 
Two variants have been identified: deep and shallow embeddings [10]. In a deep embedding, the language and semantics of the method are fully formalized as an object in the logic of the specification language. In this case, it is possible to prove meta-theoretical properties of the embedded method, but the statement and proof of properties for a particular application require painful encoding into the formalized semantics. In the shallow approach, there is a syntactic translation of the objects of the method into semantically equivalent objects in the language of the verification system. In this case, meta-theoretical properties cannot be stated, but the encoding and analysis of particular applications is simpler. Both of these approaches consider the formal notation as a unity and do not separate method from logic. This is consistent with the way most formal methods are presented—the methodological aspects of B, for example, are described in terms of a certain set theory [1], and a certain logic of partial terms is introduced to support the method of tabular specifications [34]. We question whether such unity—the tight coupling of method and logic—really is necessary. To our thinking, the method-specific aspects tend to be at the outermost, or "structural" levels of the specification language, and are not very sensitive to the actual logic employed for expressions inside the structure. For example, the tabular method employs tables to specify aspects of a system's requirements or behavior, but is largely indifferent to the logic in which table entries are specified, provided that it possesses certain attributes (e.g., an adequate treatment of partial functions). 
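The deep/shallow distinction can be made concrete with a toy logic of boolean expressions. The sketch below is our own illustration (in Python rather than a theorem prover's logic; all names are ours): in the deep style the object language is reified as a datatype with its own evaluator, so its syntax remains available for meta-level analysis, while in the shallow style each phrase is translated directly into a host-language function and no syntax survives.

```python
from dataclasses import dataclass

# Deep embedding: object-language syntax becomes host-language data,
# interpreted by an explicit evaluator.
@dataclass
class Var: name: str
@dataclass
class Not: arg: object
@dataclass
class And: left: object; right: object

def eval_deep(e, env):
    if isinstance(e, Var):
        return env[e.name]
    if isinstance(e, Not):
        return not eval_deep(e.arg, env)
    return eval_deep(e.left, env) and eval_deep(e.right, env)

# Shallow embedding: each construct is translated directly into a
# host-language function; there is no syntax left to analyze.
def var(name):  return lambda env: env[name]
def neg(p):     return lambda env: not p(env)
def conj(p, q): return lambda env: p(env) and q(env)

# The same formula, NOT(a AND b), in both styles:
deep = Not(And(Var("a"), Var("b")))
shallow = neg(conj(var("a"), var("b")))
```

The two agree on every environment, but only `deep` can be inspected (e.g., to count connectives or apply a De Morgan rewrite), mirroring the point that deep embeddings support meta-theory at the cost of heavier encoding for particular applications.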
Given this perspective, we propose a new kind of embedding, in which the structural part of a method is embedded in the logic of the verification system (by means of either a shallow or a deep embedding, but most commonly the former), while the logic part of the method (its notation for expressions) is simply replaced by that of the verification system. By fitting the structural language elements of a method around a well-supported logic, we get the best of both worlds, and quite cheaply. Of course, this will not satisfy those who require the authentic language of a particular formal method, but it provides an attractive way to support the "style" of such a method, or to add methodological discipline to the raw logic of a verification system. In this paper we study this variation on embedding, which we call structural embedding. The paper is organized as follows. We give an overview of the notions involved in this kind of embedding in Section 2 and we describe examples in Sections 3 and 4. The final section compares this approach with others, and discusses how general-purpose verification systems could be restructured to better support this type of activity. 2. Structural Embedding. A formal method provides a specification language, which is built on a particular logic. Since formal methods are intended to organize formal specifications, the specification language is invariably structured in several syntactic levels. Usually, the outermost level concerns some notion of "module" and relationships among these, while the innermost level provides the expression language. Different names are used for the top-level module constructs in different specification languages: for example, machines in B, schemas in Z, theories in PVS. Specification languages usually provide several mechanisms to combine their modules in order to build large-scale systems. Most of the method in a formal method is expressed at this level. 
For example, invariants may be specified at the module level, giving rise to proof obligations on the operations specified within each module, or refinement relationships may be specified across modules, giving rise to further proof obligations. An embedding is a semantic encoding of one specification language into another, intended to allow tools for the one to be extended to the other. In our context, we are interested in embedding the specification language of a formal method into that of a verification system. Using embeddings, the complementary strengths of several formal methods and verification systems can be combined to support different aspects of verified software development. The semantics of the language of a formal method can be encoded in a verification system either by using an extra-logical translation (i.e., a kind of compiler), in which case we speak of a shallow embedding; or it can be defined directly in the specification language of the verification system, and in this case we talk of a deep embedding [10]. In a structural embedding, which is orthogonal to both of these, only the outermost level of the specification language is embedded in the logic of the verification system. The innermost level of the specification language is directly replaced, not embedded, by the expression language of the verification system. The logical framework of the embedded notation relies completely on the specification language of the verification system. We can describe the way this works as follows. Let $\mathcal{L}_\text{FM}$ and $\mathcal{L}_\text{VS}$ be the specification languages of a formal method and a verification system, respectively. By language abuse, we use the same symbols for their logics. We use the judgment $S \models_{\mathcal{L}} P$ to mean that $P$ is a property satisfied by the specification $S$ in the logic $\mathcal{L}$. 
In these terms, a semantic embedding is a translation $\cdot^*: \mathcal{L}_\text{FM} \mapsto \mathcal{L}_\text{VS}$ satisfying $$S \models_{\mathcal{L}_\text{FM}} P \Rightarrow \mathcal{L}_\text{FM}\_\text{in}\_\mathcal{L}_\text{VS} \land S^* \models_{\mathcal{L}_\text{VS}} P^*$$ where $\mathcal{L}_\text{FM}\_\text{in}\_\mathcal{L}_\text{VS}$ is the set of axioms and definitions in $\mathcal{L}_\text{VS}$ encoding the semantics of $\mathcal{L}_\text{FM}$. The shallow or deep degree of the embedding depends on the information contained in $\mathcal{L}_\text{FM}\_\text{in}\_\mathcal{L}_\text{VS}$. For a structural embedding, we consider that $\mathcal{L}_\text{FM}$ consists of two sub-languages $\mathcal{L}_\text{FM} = \mathcal{L}_\text{FM}^0 \cup \mathcal{L}_\text{FM}^1$, where $\mathcal{L}_\text{FM}^0$ represents the outermost level of the language, and $\mathcal{L}_\text{FM}^1$ represents the innermost one. First, we construct $\mathcal{L}_\text{FM}' = \mathcal{L}_\text{FM}^0 \cup \mathcal{L}_\text{VS}$, which replaces the inner language by that of the verification system and adjusts $\mathcal{L}_\text{FM}^0$ to accommodate its new context while preserving its "intent." There is no formal relationship or mechanical translation between $\mathcal{L}_\text{FM}$ and $\mathcal{L}_\text{FM}'$—the goal is simply to preserve the ideas and intent of the method to the extent possible. 
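The idea can be made concrete with a toy structural embedding, sketched below in Python (our illustration only: Python functions stand in for the expression language $\mathcal{L}_\text{VS}$, and every name is an assumption of ours). Only the outer, structural level of a machine-style method is represented as data; inner "expressions" are ordinary host-language predicates and transitions, and the translation amounts to generating the method's proof obligations.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Dict[str, int]

@dataclass
class Machine:
    """Only the structural level of the method is reified; the inner
    expressions are host-language functions (the stand-in for L_VS)."""
    invariant: Callable[[State], bool]
    init: Callable[[], State]
    operations: Dict[str, Tuple[Callable[[State], bool],    # precondition
                                Callable[[State], State]]]  # transition

def proof_obligations(machine: Machine, states: List[State]) -> Dict[str, bool]:
    """Sound(S) for this toy method: the invariant holds after init, and
    every operation preserves it from any invariant state satisfying its
    precondition. Checking a finite state space stands in for proof."""
    result = {"init": machine.invariant(machine.init())}
    for name, (pre, op) in machine.operations.items():
        result[name] = all(machine.invariant(op(s))
                           for s in states
                           if machine.invariant(s) and pre(s))
    return result

# A hypothetical bounded-counter machine in this embedded style:
counter = Machine(
    invariant=lambda s: 0 <= s["n"] <= 10,
    init=lambda: {"n": 0},
    operations={"inc": (lambda s: s["n"] < 10,
                        lambda s: {"n": s["n"] + 1})},
)
```

For this instance, `proof_obligations(counter, [{"n": i} for i in range(-2, 13)])` reports both obligations satisfied; in a real structural embedding these obligations would be emitted as theorems in the verification system's logic rather than checked by enumeration.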
A structural embedding is then a translation $\cdot^*: \mathcal{L}_\text{FM}^0 \mapsto \mathcal{L}_\text{VS}$, which is extended to $\cdot^*: \mathcal{L}_\text{FM}' \mapsto \mathcal{L}_\text{VS}$ in the obvious way (as the identity on $\mathcal{L}_\text{VS}$) satisfying $$S \models_{\mathcal{L}_\text{FM}} P \Rightarrow \mathcal{L}_\text{FM}^0\_\text{in}\_\mathcal{L}_\text{VS} \land S^* \models_{\mathcal{L}_\text{VS}} P^*$$ where $\mathcal{L}_\text{FM}^0\_\text{in}\_\mathcal{L}_\text{VS}$ is the set of axioms and definitions in $\mathcal{L}_\text{VS}$ encoding the semantics of $\mathcal{L}_\text{FM}^0$. Notice that the semantics of $\mathcal{L}_\text{FM}^1$ are not embedded, and that both shallow and deep embeddings are still possible for $\mathcal{L}_\text{FM}^0$. To preserve intent in a structural embedding requires that well-formedness of specifications is preserved in both logics. That is, \[ \models_{\mathcal{L}_\text{FM}} \text{Sound}_{\mathcal{L}_\text{FM}}(S) \iff \mathcal{L}_\text{FM}\_\text{in}\_\mathcal{L}_\text{VS} \models_{\mathcal{L}_\text{VS}} \text{Sound}_{\mathcal{L}_\text{VS}}(S^*) \] By \text{Sound}_{\mathcal{L}}(S), we mean the set of formulas (proof obligations) that guarantees some method-specific well-formedness property of specification \( S \) in logic \( \mathcal{L} \) (e.g., the checks for overlapping or missing conditions in a tabular specification). Formal methods are often concerned with metalogical relationships between specifications (e.g., that one should be a refinement of another, or that one should be an invariant for the other), and \text{Sound} is then extended to the proof obligations that ensure satisfaction of the desired relationship. Notice that \text{Sound} is parameterized by the logic. In practice, we expect that \text{Sound} relies only on very general properties of a logic, so that proof obligations retain their intuitive content under the structural embedding. In the following two sections we present concrete examples of structural embeddings. 3. 
The B-Method in PVS. In this first example, we describe a structural embedding of the B-method in the higher-order logic of PVS. The B-method [1] is a state-oriented formal method mainly intended for development of sequential systems. The underlying logic of the method is a set theory with a first-order predicate calculus. PVS [31] is a verification system whose specification language is a higher-order logic with a type system. PVS does not come with a particular built-in methodology. 3.1. An Overview of the B-Method. In B, specifications are structured in modules called machines. Machines can be of three kinds: abstract machines, refinements, and implementations. Each kind of machine corresponds to a different stage of software development. The initial specification of a problem is given by a set of abstract machines. Refinements allow data reification of specifications. Final refinements, those that are not intended to be refined anymore, are called implementations. A machine is an abstract description of the statics and dynamics of a system. Statics are given by a state declaration: constants, properties of the constants, variables, and an invariant (a property satisfied by the state of the machine). Dynamics are given by operations or services provided by the machine. In contrast to other state-oriented methods, operations in B are not specified by before-after predicates, but by an equivalent mechanism of predicate transformers called generalized substitutions. Large software development is supported using several composition mechanisms. These mechanisms give different access privileges to the operations or to the local variables of an external machine. In this way, it is possible to build complex machines incrementally by using previously defined ones. Thus, by using the unified notation of machines, B supports the complete life cycle of software development. Several case studies of developments in B are reported in [7]. 
That work pointed out some drawbacks of the B-method: • Although typing conditions can be handled using the set theory provided by B, mathematical objects such as variables or functions are not explicitly typed. In some cases this “free-typing” style obscures the specifications. • The generalized substitutions mechanism encourages the writing of algorithmic specifications. Some kinds of operations could be more naturally expressed by before-after predicates. The same conclusion was drawn by Bicarregui and Ritchie in [8]. • Support for data types is limited. In particular, record types are absent in the B notation. • Proof obligations usually deal with type conditions that could easily be solved by a type checker. • B imposes a very rigid discipline. For instance, parameters of a machine are restricted to be scalars or uninterpreted sets. In some cases such restrictions seem to be very strong. Most of these criticisms concern the limitations of the formal notation rather than the methodological aspects of B. We argue that it is possible to separate the abstract machine mechanism from its specification language, and to use the expression language of PVS instead of that of B. In this way, we combine the best features of each technique: the methodology of B, and the expressiveness and richness (and automation) of the specification language of PVS. 3.2. An Example: A Drinks Dispenser Machine. To concretize our ideas, we present in Figure 3.1 an example of a drinks dispenser specification written in B by Lano and Haughton [25]. The specification is, for the most part, self-explanatory. At first glance, the expressions of the machine Dispenser could be easily translated to PVS. For instance, the invariant dstate ∈ DSTATE ∧ given ∈ NAT ∧ given ≤ lifetime literally corresponds to the PVS expression member(dstate, DSTATE) AND member(given, NAT) AND given <= lifetime. However, the PVS specification language is fully typed while the B notation is not. 
For instance, although it is possible to define a set in PVS containing all the natural numbers, the normal way to handle a property like given ∈ NAT in PVS is by using a type declaration given: NAT—the natural numbers are a basic type in PVS, whereas they are a predefined set in B.¹ Thus, in PVS, the invariant is reduced to given ≤ lifetime, and its other two clauses become typing judgments. In Figure 3.2 we present a fully typed version of the dispenser machine which uses the expression language of PVS. Notice also that PVS machines use a clause TYPES rather than the original clause SETS of B. From the PVS point of view, DSTATE is not a set, but a type. Its role in the specification is not that of a container, but that of a typing tag. Also note that functions are not interpreted as binary relations in PVS, but as computational objects.

3.3. Semantics. The semantics of the B-method is described in [1] in terms of a particular set theory and a first-order logic. Roughly speaking, the soundness of a specification is given by the validity of a set of axioms extracted from the machines. These axioms are usually called proof obligations. The most important axioms concern the preservation of the invariant by the operations. In general, these proof obligations have the form:

  PROPERTIES ∧ INVARIANT ⇒ [operation] INVARIANT.

As noted before, operations are defined in B as predicate transformers. Thus, for example, the proof obligation concerning the initialization clause of the machine Dispenser states that after the initialization of the machine, the invariant is satisfied. Formally, it states that the following proposition holds:

¹In fact, in B, NAT is the predefined set of naturals between 1 and maxint, where maxint is not known a priori. PVS can also represent this as a type: subrange(1, maxint).
MACHINE Dispenser(lifetime)
SETS
    DSTATE = { stocked, unstocked }
CONSTANTS ok, notok
PROPERTIES ok = 0 ∧ notok = 1
VARIABLES dstate, given
INVARIANT
    dstate ∈ DSTATE ∧ given ∈ NAT ∧ given ≤ lifetime
INITIALIZATION
    dstate := unstocked || given := 0
OPERATIONS
    restock = dstate := stocked;
    give_drink =
        PRE dstate = stocked ∧ given < lifetime
        THEN dstate :∈ DSTATE || given := given + 1
        END;
    bb ← is_stocked =
        IF dstate = stocked THEN bb := ok ELSE bb := notok END;
    count ← number_given = count := given
END

FIG. 3.1. A Drinks Dispenser in B

Dispenser_in_PVS [ lifetime: nat ] : MACHINE
BEGIN
    TYPES
        DSTATE = {stocked, unstocked}
    CONSTANTS
        ok    : nat = 0
        notok : nat = 1
    VARIABLES
        dstate : DSTATE
        given  : nat
    INVARIANT
        given <= lifetime
    INITIALIZATION
        dstate := unstocked || given := 0
    OPERATIONS
        restock = dstate := stocked
        give_drink =
            PRE dstate = stocked AND given < lifetime
            THEN dstate :: DSTATE || given := given + 1
            END
        is_stocked : nat = IF dstate = stocked THEN ok ELSE notok ENDIF
        count : nat = given
END Dispenser_in_PVS

FIG. 3.2. The Drinks Dispenser Machine Structurally Embedded in PVS

  ok = 0 ∧ notok = 1 ⇒ [dstate := unstocked || given := 0] INVARIANT.

That is,

  ok = 0 ∧ notok = 1 ⇒ unstocked ∈ DSTATE ∧ 0 ≤ lifetime,

which is trivially true.² As pointed out before, a major difference between the specifications given in Figures 3.1 and 3.2 is that PVS machines are based on the higher-order logic and type theory of PVS. In particular, a B machine is embedded as a PVS theory, where the parameters and types of the machine become parameters and types of the theory. The state of a B machine is encoded in the functional style of PVS as follows. The variables of the machine define a record type, called the general type. Each field of the record corresponds to a variable of the machine. The invariant of the machine is expressed as a subtype of the general type. In this way, the mutual dependence between the variables given by the constraints is handled by the dependent type mechanism of PVS.
The general type defined for Dispenser_in_PVS is

  Dispenser_in_PVS_Type : TYPE = [# dstate: DSTATE, given: nat #]

(Record types in PVS are declared between the brackets [# and #]. Instances of a record type are given between the parentheses (# and #). Record and function overriding are indicated in PVS by the WITH construct.) The invariant of the machine is handled by the following type:

  Dispenser_in_PVS : TYPE =
    { self: Dispenser_in_PVS_Type | given(self) <= lifetime }

An operation op of a machine M with inputs \(i_1:I_1, \ldots, i_n:I_n\) and outputs \(o_1:O_1, \ldots, o_m:O_m\) is translated into PVS as a function

  op(i_1:I_1, ..., i_n:I_n)(self: M): [o_1:O_1, ..., o_m:O_m, self_out: M].

If op has no inputs and outputs, its signature is simply op(self: M): M. For instance:

  restock(self: Dispenser_in_PVS): Dispenser_in_PVS =
    LET self = self WITH [dstate := stocked] IN
      self

Generalized substitutions are interpreted as PVS expressions dealing with record field overriding, function updating, set operations, and typing conditions. Certain kinds of compositions are supported by using the importing mechanism of PVS. The complete embedding is described in [28].

Soundness of a B machine corresponds to type correctness of the PVS theory embedding it. Therefore, the proof obligations to be checked are just the type correctness conditions (TCCs) generated by the PVS type system, and so it is possible to use the automation provided by the PVS type-checker and theorem prover. The type correctness conditions generated for the PVS embedding of a B machine guarantee that the initial state satisfies the invariant and that the invariant is preserved by the operations.

²In B, lowercase parameters, such as lifetime, are assumed to be scalars.
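The flavor of this functional encoding can be conveyed in ordinary code. The following Python fragment is a purely illustrative sketch of ours (it is not PBS output, and the constant LIFETIME is a hypothetical instantiation of the machine parameter): the frozen dataclass plays the role of the general type, the invariant predicate stands in for the predicate subtype, and dataclasses.replace mirrors the WITH override.

```python
# Illustrative sketch (ours, not generated by PBS): the functional encoding
# of the Dispenser machine, with invariant checks where PVS would emit TCCs.
from dataclasses import dataclass, replace

LIFETIME = 5  # hypothetical value for the machine parameter


@dataclass(frozen=True)
class DispenserType:      # the "general type": one field per machine variable
    dstate: str           # DSTATE would be an enumeration type in PVS
    given: int


def invariant(s: DispenserType) -> bool:
    # the predicate subtype: { self | given(self) <= lifetime }
    return s.given <= LIFETIME


def restock(s: DispenserType) -> DispenserType:
    # analogue of: self WITH [dstate := stocked]
    return replace(s, dstate="stocked")


def give_drink(s: DispenserType) -> DispenserType:
    assert s.dstate == "stocked" and s.given < LIFETIME  # the PRE clause
    return replace(s, given=s.given + 1)


# Obligation for initialization: the initial state satisfies the invariant.
init = DispenserType(dstate="unstocked", given=0)
assert invariant(init)

# Invariant preservation, checked on one run of the operations.
s = give_drink(restock(init))
assert invariant(s) and s.given == 1
```

Where PVS discharges these obligations once and for all by type checking, the sketch can only check them dynamically on particular runs; that difference is exactly the value of the embedding.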
PVS generates four TCCs for the machine Dispenser_in_PVS. All of them are automatically discharged by the theorem prover. For instance, the TCC corresponding to the initialization clause is

  init_TCC1 :
    {1}  FORALL (self):
           self = (# dstate := unstocked, given := 0 #) IMPLIES
             0 <= lifetime

The embedding that we have described corresponds to a shallow structural embedding. That is, meta-theoretical properties about the abstract machine notation cannot be proved. It has been completely implemented by a front-end tool called PBS [28]. An alternative deeper embedding has been proposed in [9]. That work formalizes the generalized substitution mechanism of the B-method in the higher-order logics of Coq and PVS. In this case, it is possible to verify meta-theoretical properties about generalized substitutions.

3.4. The PBS System. PBS works like a compiler. It takes as input a file m.bps containing an abstract machine and generates its corresponding embedding as a PVS theory in the file m.pvs. We have rewritten several examples of abstract machines from [1, 25, 29] in PBS. The results obtained are satisfactory according to our expectations: trivial type conditions are discharged automatically by the type checker of PVS, and most of the other proof obligations can be solved by the automated decision procedures and strategies provided by its theorem prover.

Table 3.1 summarizes one of these developments. Client, Product, and Invoice are part of an invoice system developed in [1]. The example provides the basic functionality of a data processing system. During the development, the type checker of PVS allowed us to find some minor errors in the specification given in [1].
TABLE 3.1

<table>
<thead>
<tr>
<th>Machine</th>
<th>PBS (in lines)</th>
<th>PVS theory (in lines)</th>
<th>TCCs</th>
<th>Auto proved</th>
</tr>
</thead>
<tbody>
<tr>
<td>Client</td>
<td>56</td>
<td>83</td>
<td>12</td>
<td>100%</td>
</tr>
<tr>
<td>Product</td>
<td>66</td>
<td>92</td>
<td>18</td>
<td>83%</td>
</tr>
<tr>
<td>Invoice</td>
<td>125</td>
<td>166</td>
<td>48</td>
<td>87%</td>
</tr>
</tbody>
</table>

Büchi [11, 12] describes a prototypical banking application implemented in two commercial tools supporting the B-method: Atelier B from Steria and the B-Toolkit from B-Core. Bank is the largest machine of that system, and we have rewritten it in PBS. In Table 3.2, we compare our metrics for this example with those given by Büchi.³

TABLE 3.2
Comparison Between B and PBS Machines

<table>
<thead>
<tr>
<th>Machine</th>
<th>File length (in lines)</th>
<th>Proof obligations</th>
<th>Auto proved</th>
</tr>
</thead>
<tbody>
<tr>
<td>Bank in PBS</td>
<td>232</td>
<td>47</td>
<td>94%</td>
</tr>
<tr>
<td>Bank in B</td>
<td>362</td>
<td>49</td>
<td>95%</td>
</tr>
</tbody>
</table>

The difference between the size of the files is due to the fact that many properties are attached to the types of the variables and parameters in the PBS specification and therefore need not be repeated in the invariant and the pre-conditions to the operations, making the specification shorter. The proof obligations of the PBS and B machines do not correspond one-to-one either: recall that proof obligations in PBS machines are generated by the type checker of PVS, which is able to solve some type conditions internally, and to subsume some type conditions in others.

³For these developments we are using PVS Version 2.3.

A feature introduced in PVS Version 2.3 allows PVS "ground terms" (i.e., executable definitions applied to concrete data) to be evaluated via compilation into Lisp. The compiler (due to N.
Shankar) uses sophisticated static analysis to eliminate some of the inefficiencies of applicative programs, so that compiled PVS executes extremely rapidly. Combined with the refinement mechanism of the B-method, this provides good support for rapid prototyping, testing, and code generation. For example, by refining the PVS choice function that interprets the ANY construct of B into a linear search, we obtain a rapid prototype for the B-Bank that can perform many thousand Bank operations (create an account, make a deposit, perform a balance enquiry, etc.) per second. PBS and some of the examples that we have developed are available electronically at: http://www.csl.sri.com/~munoz/src/PBS.

4. Tabular Representations. Several methods for documentation and analysis of requirements make some use of tabular specifications. These include methods such as SCR and CoRE that are derived from the "four variable model" of Parnas [35], the RSML notation of Leveson [26], and the decision tables of Sherry [43]. All these methods can be considered as having two levels of "structure" above their base logic: the top level provides the attributes that are unique to each method, but the lower level is broadly similar across all of them: it is the use of tables to define functions by cases. A simple example is the following definition of the function sign(x), which returns -1, 0, or 1 according to whether its integer argument is negative, zero, or positive.

\[
\text{sign}(x) = \begin{array}{|c|c|c|}
\hline
x < 0 & x = 0 & x > 0 \\
\hline
-1 & 0 & +1 \\
\hline
\end{array}
\]

This is an example of a piecewise continuous function that requires definition by cases, and the tabular presentation provides two benefits.
- It provides a visually attractive presentation of the definition that eases comprehension.
- It makes the cases explicit, thereby allowing checks that none of them overlap and that none have been forgotten.
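These two structural checks can be made concrete. The sketch below is ours (it is not taken from any of the cited tools): it encodes the sign table as a list of (condition, value) cases and verifies, for a given argument, that exactly one case applies. A theorem prover establishes the same disjointness and completeness properties for all arguments at once, rather than by sampling.

```python
# Illustrative sketch: a tabular function as explicit (condition, value) cases.
cases = [
    (lambda x: x < 0, -1),
    (lambda x: x == 0, 0),
    (lambda x: x > 0, +1),
]

def sign(x):
    matching = [v for cond, v in cases if cond(x)]
    assert len(matching) <= 1, "overlapping cases"   # no two cells apply
    assert len(matching) >= 1, "forgotten case"      # some cell applies
    return matching[0]

assert [sign(x) for x in (-7, 0, 42)] == [-1, 0, 1]
```

If a fourth case such as `lambda x: x >= 0` were added, the overlap assertion would fire at the first nonnegative argument; deleting the `x == 0` case would trip the completeness assertion at zero.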
The checks for forgotten and overlapping cases generate proof obligations that have been shown to be a potent tool for error detection [20]. The structural properties of tables interact with well-definedness concerns for the underlying logic, as seen in the following table from [33, Figure 1], where the applications of the (real-valued) square root function in the second and third rows can only be shown to be well-defined (that is, to have nonnegative arguments) when the corresponding row constraints are taken into account.

<table>
<thead>
<tr>
<th></th>
<th>( y = 27 )</th>
<th>( y &gt; 27 )</th>
<th>( y &lt; 27 )</th>
</tr>
</thead>
<tbody>
<tr>
<td>( x = 3 )</td>
<td>( 27 + \sqrt{27} )</td>
<td>( 54 + \sqrt{27} )</td>
<td>( y^2 + 3 )</td>
</tr>
<tr>
<td>( x &lt; 3 )</td>
<td>( 27 + \sqrt{3 - x} )</td>
<td>( y + \sqrt{3 - x} )</td>
<td>( y^2 + (x - 3)^2 )</td>
</tr>
<tr>
<td>( x &gt; 3 )</td>
<td>( 27 + \sqrt{x - 3} )</td>
<td>( 2 \times y + \sqrt{x - 3} )</td>
<td>( y^2 + (3 - x)^2 )</td>
</tr>
</tbody>
</table>

Another interaction is seen when tables allow "don't care" and blank entries (which must be shown to be unreachable). An example of the latter is the quotient lookup table for an SRT divider shown at right. The notorious Pentium FDIV bug was due to bad entries in a similar table. The triangular-shaped blank regions at top and bottom of these tables are never referenced by the division algorithm; the Pentium error was that certain entries, believed to be in this inaccessible region and therefore containing arbitrary data, were in fact sometimes referenced during execution [37]. Proof obligations to show that such regions truly are unreachable can help avoid such errors [27, 39]. Notice that the logic required to provide an interpretation for tables with blank entries must be one that provides either partial functions or dependent typing. Parnas [34] proposes a partial term logic similar to that of Beeson [6, Section 5] for dealing with these complexities.
Parnas' approach is perfectly satisfactory, but we contend that tables are a structural element that can be hosted, with suitable adjustments and restrictions, on almost any logic. In particular, the predicate and dependent typing of PVS [41], although quite different from Parnas' logic, provides an adequate foundation for a very rich set of tabular constructions. The structural embedding of tables into PVS is a shallow one that differs from the PBS embedding of B by being integrated directly into PVS using an intermediate COND construct [30]. It would have been perfectly feasible to use an external translation similar to that of PBS, but tables seemed of sufficiently general utility that we preferred a more tightly integrated implementation. The specific tabular constructions of SCR, RSML, and Sherry can then be encoded into the generic PVS tables using techniques described in [30].

The structural embedding of tables in PVS can be compared with an alternative approach where theorem provers have been used as back-ends to method-specific table analyzers. One example is RSML, where proof obligations generated by a dedicated tool have been submitted to a BDD-based tautology checker [19], PVS [18], and the Stanford Validity Checker (SVC) [32]. In all these cases, the back-end tools are used only to examine proof obligations that ensure no overlapping or forgotten cases: they do not have access to other specification properties (e.g., they would not be able to state or prove that \( \text{sign}(x) \) is idempotent). With the structural embedding in PVS, however, the full specification is available for analysis; [30] describes examples where PVS is used to analyze (by theorem proving and model checking) properties of tabular specifications that extend beyond simple consistency of the tables themselves.

5. Comparison, Recommendation, and Conclusion.
A formal method provides guidance and discipline in the application of formal mathematics to the processes of specification, design, and implementation of software and hardware systems. Verification systems, theorem provers, and model checkers can provide mechanized support for the analysis of such formal descriptions. If we want both method and mechanization, there seem to be four basic choices.
- Develop mechanized support for the chosen method from the ground up. The B tools exemplify this approach.
- Develop front-end tools for the chosen method and use existing verification systems and model checkers for back-end reasoning support. For example, the front-end tools may generate proof obligations that are submitted to a theorem prover. Some of the tools developed for RSML and SCR exemplify this approach.
- Provide an embedding of the chosen method into the logic supported by a verification system. Embeddings of VDM in PVS and Isabelle exemplify this approach.
- Add a method to an existing verification system or model checker. Structural embeddings are one way to do this: we take the structural or "method" level of the language from an existing method and wrap it around the logic of a verification system (or, dually, we take an existing method and replace the "logic" level of its language by that of a verification system). The structural embedding of B in PVS by the PBS tool exemplifies this approach.

The "ground up" approach potentially can deliver the most seamless integration, but incurs the very high cost of developing a customized theorem prover for the chosen method. It is not just that theorem provers are large and complex tools, and therefore expensive to develop and maintain.
The largest cost is the hidden one of gaining the experience necessary to build an effective theorem prover: these systems require delicate judgments concerning how to integrate interaction and automation, how to combine rewriting and decision procedures, how to decide combinations of theories, how to integrate decision procedures with heuristics, and how to combine an expressive notation with effective deductive support. It is no accident that the most effective verification systems come from groups that have been building them for a decade or more, and that have learned from many failures. The "back-end" approach can be an effective way to discharge proof obligations, but does not allow the verification system to provide any other kind of deductive support. For example, as noted, the RSML table analyzer generates proof obligations that have been submitted to several different theorem proving components, but these tools see only the proof obligations and do not have access to the full specification. When a different kind of analysis is desired—for example, checking of invariants—then a different translator and a different back end tool (e.g., a model checker) may be required [13]. By contrast, the structural embedding of tables in PVS allows all the capabilities of PVS to be applied to the full specification, including use of its model checker to examine invariants [30]. Checking of proof obligations with a back-end tool is not without difficulties. First is the question of compatibility between the logic of the method and that of the back-end tool. The choices are between embedding the logic of the method in that of the tool, and simply replacing the former by the latter when generating proof obligations. Pratten [38] describes a tool that adopts the former approach: it generates a PVS representation of proof obligations for the B method that conform to the standard semantics of B given in [1]. 
The RSML table analyzer adopts the latter approach (which can also be considered a shallow embedding, since RSML specifications use a simple fragment of first order logic). Second is the issue of providing an adequate formalization of all the supporting theories required for a given specification. For example, formal analysis of a program that uses a data structure to represent a graph will require access to a formalization of some fragment of graph theory. If supporting theories are written in the notation of the formal method, then analysis will be complicated by their embedding into the language of the verification system; also, supporting theories should generally be written in a way that supports effective deduction (e.g., by presenting definitions and lemmas in a form that is convenient for rewriting), and this may be contrary to the style of the method. If the supporting theories are written directly in the language of the verification system, then the intended method is not followed to the full extent, and the specifier must master two different specification languages and styles. Traditional shallow and deep embeddings also suffer from the drawbacks just outlined. Furthermore, the difficulties of embedding a formal specification language in a different logic are greater when the full notation is to be supported, rather than just its proof obligations. Agerholm [2] describes a shallow embedding of VDM-SL into PVS that transforms VDM-SL constructs to similar PVS constructs, and Agerholm, Bicarregui and Maharaj [3] describe an extension of this approach to support refinements. Although the constructs are often similar, they are not identical, so the semantics of the VDM-SL specifications are not fully preserved by this embedding. Agerholm and Frost [4] describe an alternative embedding of VDM-SL into Isabelle; here, the semantics are preserved but the embedding is correspondingly more difficult. 
Whenever the notation of one method is supported by the logic and mechanization of another (whether as a back-end or by embedding), there is tension between supporting the semantics of the former vs. fully exploiting the mechanization of the latter. And if one notation is supported by more than one tool, there is the additional concern that each will provide slightly different semantics. Structural embeddings sidestep these concerns because they do not claim to preserve the full semantics of the original method. A structural embedding of VDM, for example, would be similar to the first of the two VDM embeddings mentioned above, except that the logic of VDM would be replaced by that of the verification system concerned, and a traditional embedding would be provided only for the outermost, or structural level of the VDM language (e.g., its notions of state and of refinement). Of course, the resulting system would not support true VDM any more than PBS supports true B, and this would be a fatal defect for some users. However, we believe that others will value the methodological contributions of VDM, or B, more than the idiosyncrasies of their logics and would be happy to trade those logics for others in return for better automated support of their preferred method. There are some potential difficulties, however, to this approach. In the first place, even quite good verification systems are not uniformly effective, and the encodings produced by structural embeddings may take them into areas where they perform poorly. For example, one of the proof obligations generated by the RSML checker caused PVS to go into an apparently endless computation [18] (this was a back-end application rather than a structural embedding but the problem would be the same in either case). 
In fact, PVS had discovered that the formula was not a propositional tautology within a couple of seconds (which is all the user wanted to know), and then spent the next several days trying to calculate a minimal set of subgoals to return to the user (there were well over 1,000). Design choices made in the expectation that the user is conducting an interactive proof of a human-generated conjecture may be inappropriate when dealing with formulas generated by mechanical translation. A related problem is that most interactive verification systems assume that a human is guiding the process, and they therefore provide only rudimentary interfaces for other programs. A deeper manifestation of the same design philosophy is the monolithic, closed nature of most verification systems: it is almost impossible for outside programs to interact with their components or to query their internal data structures, and correspondingly difficult to create customized capabilities. Our recommendation (which is hardly original) is that verification systems should be restructured into open collections of components with well-defined application programming interfaces (APIs) that allow other programs to invoke their capabilities. A cluster of components interacting through a shared intermediate language might be a suitable overall architecture. A front-end providing structural embedding for some formal method could then communicate with the verification system through its intermediate language and its APIs. Some embedding tools have already adopted a similar architecture, but with only monolithic verification systems connected to their intermediate languages. Gravell and Pratten [16] describe a tool that automates conventional embedding of a formal notation within the logic of a verification system. The tool, called JavaLIL, has been used for the embedding of Z specifications into the higher-order logics of PVS and HOL [15]. 
Gravell and Pratten justly bemoan difficulties caused by the monolithic, closed character of the verification systems used. In a similar vein, Jacobs et al. [21, 22] describe a tool called LOOP to support embeddings of object oriented languages in general-purpose verification systems. Structural embedding does not serve the same ends as these tools: its purpose is not to support the full language of an existing formal method, but to capture just its methodological attributes and to support those in conjunction with the language of an existing verification system. We believe that those for whom methodology and mechanized support are more important than the authentic language of a specific formal method may find that a structural embedding provides a cost-effective way to achieve their goals. Of course, structural embedding does not solve all the problems of providing effective automated support for formal methods. There is more to a method than just its deductive aspects (although deductive support is the sine qua non of truly formal methods): a fully supported method also supplies automated assistance in documentation and traceability, prototyping and code development, testing and validation, and the project management that ties all these together. We would hope that these capabilities could be created by customizing (or, if necessary, developing) generic tools that support these functions, and that such generic tools could be incorporated in the open architecture described previously. Acknowledgments. The authors would like to thank N. Shankar and the anonymous referees for constructive criticism and helpful comments. --- 4This is the approach adopted by the SAL (Symbolic Analysis Laboratory) project at SRI, Berkeley and Stanford. 
However, SAL is intended to promote cooperative use of complete tools such as model checkers and theorem provers, not the components of such tools; its focus is the use of abstraction in analysis of concurrent systems represented as transition systems.

Structural embeddings: Mechanization with method

We argue that methods are separable from their logics and are largely concerned with the structure and organization of specifications. We propose a technique called structural embedding that allows the structural elements of a method to be supported by a general-purpose tool, while substituting the logic of the tool for that of the method. We have found this technique quite effective and we provide some examples of its application. We also suggest how general-purpose systems could be restructured to support this activity better.
olmocr_science_pdfs
2024-12-01
2024-12-01
19cc4c72186bbc53f5331209e95ef4a800c9ff44
Published Version: http://doi.acm.org/10.1145/502716.502722 · Citable link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:2252600

A Writer's Collaborative Assistant

Tamara Babaian, CIS Dept., Bentley College, Waltham, MA 02452, tbabaian@bentley.edu
Barbara J. Grosz, DEAS, Harvard University, Cambridge, MA 02138, grosz@deas.harvard.edu
Stuart M. Shieber, DEAS, Harvard University, Cambridge, MA 02138, shieber@deas.harvard.edu

Abstract

In traditional human-computer interfaces, a human master directs a computer system as a servant, telling it not only what to do, but also how to do it. Collaborative interfaces attempt to realign the roles, making the participants collaborators in solving the user's problem. The paper describes Writer's Aid, a system that deploys AI planning techniques to enable it to serve as an author's collaborative assistant. Writer's Aid differs from previous collaborative interfaces in both the kinds of actions the system partner takes and the underlying technology it uses to do so.
While an author writes a document, Writer's Aid helps by identifying and inserting citation keys and by autonomously finding and caching potentially relevant papers and their associated bibliographic information from various on-line sources. This autonomy, enabled by the use of a planning system at the core of Writer's Aid, distinguishes this system from other collaborative interfaces. The collaborative design and its division of labor result in more efficient operation: faster and easier writing on the user's part and more effective information gathering on the part of the system. Subjects in our laboratory user study found the system effective and the interface intuitive and easy to use.

1. Introduction and Motivation

In traditional human-computer interfaces, a person acts as the master directing a computer-system servant. Collaborative interfaces [17] attempt to realign the roles, making the participants collaborators in solving the user's problem. Formal models of collaboration [5, 8, 7] identify as some of the key features of a collaborative activity commitment to a shared, or joint, goal; an agreed-on division of labor; and communication between the parties to enable the satisfaction of joint goals. Whereas in a traditional interface the human user is the repository of all goals and takes all the initiative in determining ways to satisfy them, in a collaborative interface the participants establish shared goals and both take initiative in satisfying them. For example, the GLIDE system [16] is a network-diagram layout tool in which the user and the computer simultaneously and seamlessly work to satisfy the user's layout goals. Goal-sharing is achieved by the user's conveying layout goals through direct manipulation, and the division of labor in achieving the goals is implicit in the design of the system as a whole. Thus, a level of collaboration is achieved without explicit reasoning about goals or the state of the world.
The Distributed Information Access for Learning (DIAL) system [13] provides for multi-media interactions with a complex information system; DIAL works with users to identify information relevant to their needs. The manner in which DIAL interacts collaboratively derives from the SharedPlans theory of collaboration [7]. DIAL uses explicit representations of recipes for domain actions and reasons about intentional contexts to lessen the amount of information a user needs to provide in querying the system. It demonstrates both the efficacy of deploying a model of collaboration to inform the design of a system and the system limitations that arise from limited reasoning about knowledge and actions. GLIDE and DIAL were designed to directly implement key features of a formal model of collaboration, handling various belief and intentional constructs implicitly. The formal model of collaboration is used as a design guide in the design of the system, but is not reasoned with directly. An alternative design philosophy is found in the Collagen system [14], in which the formal model is directly reasoned with, mechanisms are incorporated to manage databases of beliefs and intentions, and a recipe library of predefined plans is used. In this case, the formal model of collaboration is treated as a specification of the implementation. In this paper, we explore another part of the design space of collaborative interfaces. We describe a writer’s collaborative assistant, implemented in a system called Writer’s Aid, designed to support an author’s writing efforts by performing various bibliographic tasks that typically arise in the process of writing a research manuscript. As in GLIDE and DIAL, Writer’s Aid follows the design-guide approach. Also like earlier systems, the division of labor between the user and Writer’s Aid is predefined and constant. 
A distinguishing feature of Writer's Aid is its ability to autonomously generate and execute plans to achieve goals provided by the user and adopted by the system. This autonomy, enabled by the use of automated planning, also distinguishes Writer's Aid from other collaborative interfaces with predefined recipes. It enables Writer's Aid to act as a robust collaborative partner, undertaking tasks in the service of a joint goal (producing a manuscript with well-formed citations) and pursuing all known avenues to accomplish those tasks. The use of planning to organize the behavior of a collaborative system is especially important in tasks for which there is more than one possible course of action and where some of the actions may unpredictably fail. Dealing with bibliographic records and papers is one such problem domain. Papers and bibliographic information are often available from multiple electronic sources such as digital libraries, authors' homepages, and on-line bibliographies. It is burdensome for a person to search systematically and thoroughly through different sources to locate papers, and tedious for people to compose bibliographic records. Because Internet searches are typically incomplete, many authors also must consult hard copies of journals and conference proceedings. The creation of citations is also disruptive to the writing process.

(Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. IUI '02, January 13-16, 2002, San Francisco, California, USA. Copyright 2002 ACM 1-58113-459-2/02/0001 ...$5.00.)
Most such work is more appropriately done by a computer system that can plan for a wide variety of approaches to data gathering and pursue them exhaustively. Similarly, many actions, such as accessing bibliographic databases or web resources, can fail (for instance, due to a server failure). In such a case, a planner can dynamically recover and replan, efficiently reusing already obtained information, until a goal is satisfied or all ways of satisfying it fail. Planning has proven advantages in the task of information integration from multiple distributed sources; it hides from the user the process of data acquisition and manipulation [1, 10]. We take this idea further and weave such information integration into an ongoing human-computer collaboration on the broader task that is the source of the information need. This setup creates advantages for both parties and thus results in more efficient overall execution of the task. The user's simultaneous involvement in editing the paper and expertise in the particular academic field provide the computer assistant with highly selective query terms and thus result in a high likelihood of Writer's Aid autonomously finding the necessary information. The system's performance of various search and formatting actions saves the writer the time and effort of identifying and creating bibliographic records and locating viewable versions of cited papers, enabling more efficient paper writing. Besides being a natural framework for reasoning about goals and actions, planning offers advantages from the design and implementation standpoints. The declarative nature of planning-based interfaces allows extending the system by adding new types of user goals, new information sources, and new information-retrieval actions independently of the existing code. As reported by Barish et al.
[3] and confirmed by our own experience with Writer's Aid, once the planning structure is in place, designing, extending, and modifying the system in response to users' requests requires relatively little effort. This flexibility ensures that as more and more specialized searchable collections appear on the Internet, Writer's Aid's repertoire of available search methods and sources can be easily augmented. Initial laboratory user studies have shown that Writer's Aid meets its design goals. In particular, most subjects (like many authors who are fluent in web technologies) ordinarily perform a sequence of online searches for bibliographic information and papers similar to those done by Writer's Aid. Even for such users, Writer's Aid's freeing them from doing these tasks and providing relevant information in a timely manner during the writing process was of significant help. An overwhelming majority of users found the system useful (some characterizing it as very useful), reflecting how often it was able to find papers the user intended to cite. Users found the interface intuitive and easy to learn. These results are all the more impressive because little attention was devoted to fine-tuning the surface features of Writer's Aid; for example, the tested version of Writer's Aid did not use any advanced context-based rank-ordering of the search results. A further example of Writer's Aid's usefulness is the preparation of this paper: some of the references cited were identified using Writer's Aid, and some of the bibliographic records and all inline citations were done by the system. Writer's Aid is implemented on top of Carsten Dominik's RefTeX package for the GNU Emacs editor and the \LaTeX and BibTeX document typesetting systems. The front end is implemented in Emacs Lisp, the planner in Allegro Common Lisp, and web access in WebL [9]. Writer's Aid is activated when the user opens a \LaTeX document in the Emacs text editor.
After giving an example to illustrate the use and advantages of Writer's Aid, the paper enumerates characteristics of the bibliographic domain and task that underlie the design choices in Writer's Aid and then presents details of the system. The system description includes a discussion of the major issues that arise in building collaborative interfaces that utilize planning in domains with incomplete information, especially the implications for the system architecture and the knowledge representation and planning methods. We briefly outline extensions to classical planning methods to meet the demands of collaborative interfaces in domains with properties like Writer's Aid's. The paper then presents results of initial user studies, describes related work, and concludes with a discussion of possible future extensions to the system.

2. Overview and Example

To illustrate Writer's Aid's functions and main features, we will explore its use in the following scenario: An author, Ed, is writing a paper on collaborative interfaces. He decides to refer to Kinny et al.'s article on teamwork, but he recalls neither the title of the paper nor where it appeared. He does not want to interrupt his writing to locate the paper, but he does want to scan the paper once it is found to make sure his claims about it are accurate. **Entering a citation command:** Ed inserts a citation command with a special Emacs command. The system then prompts him to enter search parameters: the keywords of the search and an indication of whether he wants only the bibliographic data on papers or the viewable versions as well. Ed enters Kinny and team as search keywords and selects the option of obtaining bibliographic records and viewable versions of relevant papers. After a citation command is issued, a label resembling an ordinary \LaTeX citation command is automatically generated and placed in the body of the text. The label displays the type, keywords, and status of the citation command, as shown in Figure 1.
The labels include the search keywords and type of search, a word indicating the status (SEARCHING or DONE), and the number of bibliographic records and viewable papers found for the particular citation command; they may be updated to reflect the most recent findings by a simple user request. While Ed continues writing (and inserting other citation commands), Writer's Aid plans and executes a search for the material he has requested. To make the search more efficient and better suited to Ed's needs, Writer's Aid limits the search for bibliographic information and papers to his preferred bibliographies and paper collections. Writer's Aid identifies preferred bibliographies semi-automatically at installation time by searching a user's home directory for his own BibTeX files and inspecting his browser's bookmarks. At installation time, Writer's Aid identified as Ed's preferred bibliographies his own BibTeX files and two on-line scientific collections: ResearchIndex and the ACM Digital Library. It constructs a plan to query Ed's preferred bibliographic collections for the list of bibliographic records of papers that are related to the keywords Kinny and team. Once Writer's Aid has collected the list of relevant paper titles from Ed's BibTeX files, ResearchIndex, and the ACM Digital Library, it attempts to locate a viewable version of each identified paper. Writer's Aid's arsenal includes actions for parsing BibTeX files; querying various digital repositories (currently NEC Research Institute's ResearchIndex and the ACM Digital Library) in search of papers, paper titles, and authors' homepages; parsing homepages in search of papers with a given title; and downloading papers from a given URL. **Reviewing the results and selecting a citation item:** To view the data that Writer's Aid has collected in response to the citation command, Ed puts the cursor at the body of the citation command and issues a command to display the search results.
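The status label described above can be pictured as a small formatting function. The sketch below is purely illustrative: the excerpt does not specify the actual label syntax Writer's Aid generates, so the format string and field names here are assumptions.

```python
# Hypothetical sketch of an in-text citation-command label, assuming a
# bracketed format; the real label syntax is not given in the paper excerpt.

def citation_label(keywords, kind, status, n_records, n_papers):
    """Render a label showing search type, keywords, status (SEARCHING or
    DONE), and counts of bibliographic records / viewable papers found."""
    kw = "+".join(keywords)
    return f"[cite:{kind}:{kw}:{status}:{n_records}rec/{n_papers}pdf]"

# Ed's request from the running example: keywords "Kinny" and "team",
# bibliographic records plus viewable versions.
label = citation_label(["Kinny", "team"], "bib+paper", "SEARCHING", 3, 1)
```

Updating the label as results arrive is then just a matter of re-rendering it with the latest counts and status.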
The list of paper titles that has been compiled is displayed in a separate window, while the following options are a single keypress away: viewing and editing the BibTeX record for an item; viewing the text of the paper, if it is available; and selecting an item for citation. The prompt at the bottom of the selection buffer displays a help line with the commands for each option (see Figure 1). Ed reviews the list, scanning some of the papers by issuing a view command until he identifies the paper he wants to cite, namely "Planned Team Activity". He selects this paper with a single keystroke, and Writer's Aid ensures the citation is ready for compilation, that is, the appropriate bibliographic record is inserted in the bibliography file and the key for that record is placed in the text of the paper.

**3. The Citation Application Domain**

The Writer's Aid application has several characteristics that influenced the design of the system architecture and its constituent knowledge representation, reasoning, and planning systems. These requirements arise from two sets of characteristics: characteristics of the interface, that is, the capabilities desired in the interaction with a person, and characteristics of the domain, that is, the properties of references and citations. These characteristics also appear in many other applications for which collaborative interface systems would be beneficial, and hence their effect on system design is relevant beyond this particular application. We briefly describe these characteristics and their implications for the design and implementation of the collaborative interface system.

**3.1 Interface Characteristics**

We discuss three interface requirements in this section, along with their implications for the implemented system. These requirements were considered in the initial design of the collaborative interface and later refined given the observations and interviews from our pilot user studies.
**Anytime editing/search/access capability:** A key requirement of the interface is the seamless integration of the search and selection of papers for citation with the process of writing. A user can insert new citation commands and access possibly incomplete results of the search for any of the citation commands at any time while writing or editing a paper. To guarantee the user fast and effective access to bibliographic information for all citations, information requests arising from citation commands are processed in a round-robin fashion, working on tasks in the order of increasing complexity. For instance, querying a bibliography for relevant bibliographic records is easier and faster than searching for the viewable version of a paper. As a result, Writer's Aid first attempts to locate the bibliographic records for all citations, and postpones attempting to satisfy goals related to obtaining their viewable versions.¹ **Availability of partial results and search status:** A user can access the results of a search and make a selection at any time, even when the search has not yet completed. When using Writer's Aid, a person's primary task, and hence focus, is typically on writing the paper. As a result, users usually do not explicitly monitor the progress of the system. However, Writer's Aid informs the user of the progress of the search by updating the body of the citation command appearing in the text of the paper (see Figure 1). The display of search-status information is helpful in two ways: It enables early detection of queries that produce no matches (allowing reformulation of the citation command), and it is a way to inform users about completion status of a citation, before they start reviewing and selecting from the list of papers. 
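The round-robin, cheapest-tasks-first ordering described above can be sketched in a few lines. The task names and complexity ranks below are assumptions made for illustration; they are not Writer's Aid's actual internals.

```python
# Sketch (not the actual implementation): information requests from all
# citation commands are ordered so that cheap bibliographic lookups are
# handled before the more expensive searches for viewable papers.
# The two task kinds and their ranks are assumptions.
COMPLEXITY = {"get_bib_record": 0, "get_viewable_paper": 1}

def schedule(requests):
    """Sort (citation, task) pairs by task complexity. Python's sort is
    stable, so requests of equal complexity keep their round-robin order
    across citations."""
    return sorted(requests, key=lambda r: COMPLEXITY[r[1]])

reqs = [("cite1", "get_viewable_paper"),
        ("cite2", "get_bib_record"),
        ("cite1", "get_bib_record")]
ordered = schedule(reqs)
```

With this ordering, every citation command gets its bibliographic records before any request spends time fetching full papers, which is what makes partial results available early.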
**3.2 Domain Characteristics**

The domain of Writer's Aid has two characteristics that directly affect the types of technology used in the underlying system, both relating to the *incompleteness* of the information possessed by the system. A major challenge to the system's design is the inherent incompleteness of information about Writer's Aid's domain: bibliographic records, papers, their locations, keywords. A complete description of this domain cannot be provided a priori and can never be fully acquired. Rather, the system must be able to represent partial information and to reason about acquiring missing information that is necessary to satisfy the planning goals related to a user's citation needs. Further, Writer's Aid's domain knowledge has local incompleteness; it is incomplete even with respect to properties of the objects the system knows about. For instance, it may not know which papers have a particular keyword in their abstracts or where viewable versions of a paper are located. As a result, actions in the bibliographic domain rely heavily on information gathering, which in turn affects the actions to be taken subsequently. For example, the results of a query for relevant papers may determine which viewable versions of papers the system acquires. The system must therefore be able to interleave information acquisition and planning; this is a special case of interleaved planning and plan execution. Classical planning techniques are insufficient to handle these properties of the domain.

¹ However, a user can override this default and focus Writer's Aid specifically on getting a particular paper by using a special immediate citation command. The search for materials related to an immediate citation is not abandoned until all possibilities are attempted, that is, until all related planning goals are either satisfied or found unsatisfiable.
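The interleaving of information acquisition and planning can be illustrated on a toy scale: the agent cannot know which source holds a paper until it executes a sensing action, so it executes one query, folds the result into its knowledge, and continues from there. The source names and contents below are invented for the example; this is not the paper's algorithm.

```python
# Toy illustration of interleaved sensing and planning. The agent starts
# with no knowledge of source contents (local incompleteness); each query
# is a sensing action whose result determines the next step.
# SOURCES stands in for the external world and is an assumption.
SOURCES = {"local_bib": [], "research_index": ["kinny99"]}

def interleaved_search(paper_id):
    knowledge = {}                            # source -> sensed contents
    for source in SOURCES:                    # current plan: try next source
        knowledge[source] = SOURCES[source]   # execute the sensing action
        if paper_id in knowledge[source]:     # replan given new knowledge
            return source                     # goal satisfied: stop early
    return None                               # all alternatives exhausted

found_at = interleaved_search("kinny99")
```

A classical planner, given a complete world description up front, would never need this loop; it is the incompleteness of the domain that forces execution to be woven into planning.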
To address inherent incompleteness, Writer's Aid uses an expressive yet tractable logic, PSIPLAN [2], which allows efficient representation of incomplete information. To address local incompleteness and allow for information gathering, Writer's Aid deploys a novel method for combining planning with execution of incomplete plans, which we call planning with hypotheticals. These important technical aspects of our solution are described in a later section. The domain characteristics interact with the interface characteristics. For instance, since Writer's Aid begins with little knowledge about papers relevant to the user's request, a substantial amount of information gathering may be required to satisfy a user's requests. Because most of the information is obtained from remote sources over the Internet, it may take considerable time to identify, locate, and download all of this information. On the other hand, it is very likely that the user will be satisfied with only partial results of the search, as conventional search engines often provide only partial results. To make partial results quickly available to the user (an important interface characteristic), Writer's Aid's design includes (i) formulation of the information request as a set of goals, processed in order of likelihood of relevancy to the user, (ii) initial goal reduction to account for already available information, and (iii) round-robin processing of information requests in order of increasing search complexity. These features are described in more detail in the next sections of the paper.

4. Architecture Overview

The architecture of Writer's Aid contains the following three major components in addition to a front-end Emacs interface:

- **State of Knowledge (SOK) and Goal (G) databases**: The SOK database contains Writer's Aid's knowledge about the user's preferences and the world of bibliographies, papers, and paper sources. The G database records the system's goals.
- **The Reasoning module (R)**: This module handles goal reduction with respect to the SOK database.
- **The Planning Problem Manager (PPM)**: This module constructs and manages planning problems arising from a user's citation requests. It includes a planning and execution module, PSIPOP-SE (PSIplan-based Partial Order Planner with Sensing and Execution), which constructs and executes individual plans.

In brief, Writer's Aid uses these components to handle a user's citation command as follows: The command itself results in a goal being posted to the goal database G and the goal reduction module R being invoked as a separate thread. R consults the SOK database and computes the part of the goal that is already accomplished and the part that still remains to be achieved. It places the latter onto G, passing it to the Planning Problem Manager, PPM. The PPM module creates an instance of a planning problem and hands it to the planner, PSIPOP-SE, which either constructs and executes a plan or reports failure if the planning problem is unsolvable. Upon executing the plan actions, Writer's Aid updates the SOK database to reflect all changes in knowledge. For example, additional knowledge generated by an information-gathering action is added. Upon completion of its part, PPM removes the goals that were satisfied from the goal agenda, records the failure of the (sub)goals that PPM failed to achieve, and proceeds with the next goal. When a user issues a command to view a list of records and papers corresponding to a citation command, this information is derived from the SOK, formatted, and presented in a separate window for browsing.

4.1 SOK and Goal Formulation

All of Writer's Aid's knowledge about the world is contained in the SOK database. As discussed above, this knowledge is assumed to be correct but incomplete. Since the system cannot have access to a complete description of the world, it must be able to effectively represent, reason, and plan with incomplete knowledge.
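The goal-handling flow just described (post goal, reduce against the SOK, plan and execute, update the SOK) can be sketched as a minimal pipeline. The component names follow the paper; all data structures and method bodies below are assumptions, with plan execution reduced to a stand-in step.

```python
# Minimal sketch of the SOK / G / R control flow described above.
# Facts are plain strings; real PSIPLAN propositions are far richer.

class WritersAidCore:
    def __init__(self):
        self.sok = set()      # State of Knowledge: facts believed true
        self.goals = []       # Goal database G (processed FIFO here)
        self.done = []        # goals already handled

    def post_goal(self, goal):
        """A citation command posts a goal (a set of desired facts) to G."""
        self.goals.append(goal)

    def reduce(self, goal):
        """R: keep only the part of the goal not already in the SOK."""
        return frozenset(f for f in goal if f not in self.sok)

    def step(self):
        """Handle one goal: reduce, 'plan and execute', update the SOK."""
        goal = self.goals.pop(0)
        remaining = self.reduce(goal)
        self.sok |= remaining          # stand-in for PPM/PSIPOP-SE execution
        self.done.append(goal)

core = WritersAidCore()
core.sok.add("GotBib(p1)")                     # already known
core.post_goal({"GotBib(p1)", "Got(p1)"})      # citation command's goal
core.step()                                    # only Got(p1) needs work
```

The point of the `reduce` step mirrors the paper: the planner is handed exactly the part of the goal that remains unachieved, not the whole request.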
Writer's Aid uses the PSIPLAN language [2], which enables efficient representation of an agent's incomplete knowledge about the world and its knowledge goals, and has an associated knowledge update procedure that is efficient. As described in the language specification [2], PSIPLAN entailment is sound, complete, and takes only polynomial time in the size of the agent's SOK database. Alternative planning representations are either intractable in the general case or, as with the tractable LCW (locally closed world) representation [6], lack completeness and sometimes discard correct information. Precision in reasoning about the world in the presence of the unknown bears directly on the non-redundancy of information gathering; it is thus especially critical for a system that uses costly (time-consuming) information-gathering actions. Incompleteness of reasoning may cause failure to construct all possible plans, which is also problematic for a collaborative agent. PSIPLAN formulas are either ground atoms over function-free terms, universally quantified negated clauses with exceptions, or knowledge propositions. For example, the statement

*The only bibliographies preferred by Ed are the digital library of the ACM, and maybe the ResearchIndex database.*

is represented in PSIPLAN by the following two propositions:

1. ACM's digital library is a preferred bibliography, which is represented by a ground atom: \( \text{PrefBib}(\text{ACM}) \)

2. Nothing is a preferred bibliography except for the ACM and the ResearchIndex, which is expressed as the following quantified negated clause with exceptions: \[ \forall b\, \neg \text{PrefBib}(b) \lor b = \text{ACM} \lor b = \text{RI} \]

To represent that the value of a certain proposition is known, PSIPLAN uses knowledge propositions; \( \text{KW}(\text{PrefBib}(\text{ACM})) \) denotes that the agent knows the truth value of \( \text{PrefBib}(\text{ACM}) \), that is, the agent knows whether the ACM is a preferred bibliography.
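On a toy scale, a quantified negated clause with exceptions can be modeled as a predicate name plus a set of exception terms: the clause entails the negation of the predicate for every term outside the set, and stays silent about the exceptions. The class below is illustrative only; it is not PSIPLAN's actual representation or API.

```python
# Toy model of a PSIPLAN-style universally quantified negated clause with
# exceptions: "forall b. not PrefBib(b), except possibly b in {ACM, RI}".
# The class and method names are assumptions for illustration.

class NegClauseWithExceptions:
    def __init__(self, pred, exceptions):
        self.pred = pred
        self.exceptions = set(exceptions)

    def entails_not(self, term):
        """True iff the clause entails not pred(term): the term must fall
        outside the exception set. For exceptions, nothing is entailed."""
        return term not in self.exceptions

no_pref = NegClauseWithExceptions("PrefBib", {"ACM", "RI"})
```

This captures the key property the paper relies on: one finite formula closes the world over an infinite set of terms while leaving named exceptions genuinely unknown.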
To represent the user's goals, Writer's Aid extends PSIPLAN to handle implication goals of the form \( \forall x \exists y\, P(x, y) \implies Q(x, y) \), where \( x \) and \( y \) are sets of variables, and both \( P \) and \( Q \) are conjunctions of atoms. A user's request to obtain papers relevant to subject \( Y \) is formulated as the following goal:

*For each paper that is relevant to subject \( Y \) according to some bibliography preferred by Ed, get that paper and get the bibliographic record for it.*

This goal is instantiated as three separate PSIPLAN goal formulas. The first goal is to obtain all papers and bibliographic records of papers containing keywords \( Y \) in the title and referenced in the user's own local bibliographic collections: \[ \forall p \exists b\, \text{PrefBib}(b) \land \text{LocalBib}(b) \land \text{InCollection}(p, b) \land \text{TitleUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p) \quad (1) \] The second goal extends the first to all of the user's preferred bibliographic collections: \[ \forall p \exists b\, \text{PrefBib}(b) \land \text{InCollection}(p, b) \land \text{TitleUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p) \quad (2) \] The last goal is to obtain all papers containing keywords \( Y \) in the text, rather than only in the title: \[ \forall p \exists b\, \text{PrefBib}(b) \land \text{InCollection}(p, b) \land \text{TextUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p) \quad (3) \] The first goal is entailed by the second, which is entailed by the third; thus, the set of papers required by the first goal is subsumed by the set of papers of the second goal, which, in turn, is subsumed by that of the third goal (since a title is a part of the text).
However, these three goals are posted and processed in the order presented above to explicitly prioritize the search for papers that are more likely to be in the desired set.² Writer's Aid is able to accomplish this incremental processing without doing redundant searches for the same information by saving in the SOK the information acquired during its attempts to satisfy the first and second goals.

² In this section, we use the following predicates: \( \text{PrefBib}(b) \) denotes that \( b \) is a preferred bibliography; \( \text{LocalBib}(b) \) denotes that \( b \) is a locally stored BibTeX bibliography; \( \text{InCollection}(p, b) \) denotes paper \( p \) being in the collection of bibliography \( b \); \( \text{TitleUses}(p, Y) \) denotes that keywords \( Y \) occur in \( p \)'s title (where by title we mean a combination of the title and author names); \( \text{TextUses}(p, Y) \) denotes that keywords \( Y \) occur in \( p \)'s full text, including the title and author fields; \( \text{Got}(p) \) and \( \text{GotBib}(p) \) denote, respectively, that paper \( p \) and its bibliographic record are stored locally.

4.2 Goal Reduction

Once a goal is posted to the goal database \( G \), the goal reduction module \( R \) handles the processing of the goal. \( R \) chooses a goal from \( G \), reduces it with respect to the SOK, and passes it to PPM. When the planner returns, \( R \) records success or failure in achieving the goal and proceeds to the next one. For simplicity of presentation, we abbreviate a conjunction of predicates occurring in the left-hand side of goals (1-3) above by a metapredicate \( Rel(p, b, Y) \), indicating that a paper \( p \) is relevant to keywords \( Y \) according to bibliography \( b \), and drop \( GotBib(p) \) from the right-hand side.
Thus, the goal with which we are concerned is $$g = \forall p \exists b PrefBib(b) \land Rel(p, b, Y) \implies Got(p) \quad (4)$$ To satisfy this goal, it is first necessary to find all papers that are relevant to $Y$ according to some preferred bibliography and then, for those papers only, construct a plan for obtaining them. Thus, $R$ transforms $g$ into two goals in PSIPLAN’s base language: 1. finding out the truth value of the conjunction $PrefBib(b) \land Rel(p, b, Y)$ for all possible values of $b$ and $p$, i.e., $$g_1 = \forall p \forall b KW(PrefBib(b) \land Rel(p, b, Y)),$$ and, after $g_1$ is achieved, 2. instances of $Got(p)$ corresponding to all values of $p$ for which $PrefBib(b) \land Rel(p, b, Y)$ is true. $R$ places $g_1$ as the next goal of $G$ and further reduces it with respect to the SOK to identify the part that is not already known (e.g., as a result of previously executed information-gathering actions). This computation corresponds to a special PSIPLAN operation called extended difference: given PSIPLAN propositions $A$ and $B$, the extended difference of $A$ and $B$ is the set of propositions of $A$ that are not entailed by $B$. $R$ reduces any goal $g$ by computing the extended difference of $g$ and the SOK. For example, given an information goal $g_1$ and an SOK that contains information that nothing is a preferred bibliography except for possibly the ACM digital library and the ResearchIndex, $R$ deduces that the only remaining information goals are $$g_2 = \forall p KW(PrefBib(ACM) \land Rel(p, ACM, Y)),$$ $$g_3 = \forall p KW(PrefBib(RI) \land Rel(p, RI, Y)),$$ passing $g_2$ and $g_3$ to the PPM. Such reduction of $g$, if not done prior to planning, would need to be carried out inside the planner itself while planning to achieve this goal.
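As a toy illustration of this reduction step (not the paper's formalism: here propositions are ground atoms encoded as strings, and entailment is simplified to plain set membership; all names are illustrative), the extended difference can be sketched as:

```python
# Toy sketch of PSIPLAN's extended difference, under a strong
# simplifying assumption: propositions are ground atoms (strings)
# and B entails a proposition iff B contains it.

def extended_difference(a, b):
    """Return the propositions of `a` that are not entailed by `b`."""
    return {p for p in a if p not in b}

# Hypothetical goal and state of knowledge (SOK) for the example above:
goal = {"PrefBib(ACM)", "PrefBib(RI)", "Rel(p1, ACM, Y)"}
sok = {"PrefBib(ACM)"}   # already known: ACM is a preferred bibliography

remaining = extended_difference(goal, sok)
```

In this toy model, `remaining` keeps only the facts still to be established, mirroring how $R$ passes the planner exactly the part of the goal not already covered by the SOK.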
However, in our formalism no information ever gets lost, so such early separation of yet unknown facts from those already known is an advantage, because it identifies exactly what goal the planner is working to achieve, and the user can access that information while the planner is working on the goal. The advantage becomes even more apparent if we consider having multiple agents working to achieve the goal. In such cases, reducing the goal initially prevents redundant computation. 4.3 Managing Planning Problems Once the reduced goal is computed, it is passed to PPM, the Planning Problem Manager, which takes care of creating, prioritizing, solving, and keeping track of the status of multiple planning tasks arising from goals adopted by Writer’s Aid. PPM consists of two major components: a list of planning problems, and a planning algorithm PSIPOP-SE, which constructs solution plans for individual planning problems. When a goal is passed to PPM, a new planning problem is created and passed to PSIPOP-SE, which searches for a solution plan, and returns the result. Each planning problem is a structure that records a planning goal, its solution, and the overall status of the planning problem, which is one of: open, done, or unsatisfiable. Open problems are those for which a solution plan has not been found, but whose goal has not yet been shown to be unsatisfiable. If a solution plan is found and successfully executed, PPM removes the planning problem from the list of open problems and places it on the done list. If a solution is found but an action execution failure occurs, the failed action instance is recorded and never used again by the planner; the planning problem remains on the open list until the planner establishes that no alternative course of action exists. Unsatisfiable problems are those that have unachievable goals.
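The PPM bookkeeping described above can be sketched as follows (an illustrative Python sketch; the class and method names are assumptions, not the system's actual API):

```python
# Minimal sketch of PPM's planning-problem status tracking; names
# are illustrative, not Writer's Aid's actual implementation.
from dataclasses import dataclass, field

@dataclass
class PlanningProblem:
    goal: str
    status: str = "open"           # one of: open, done, unsatisfiable
    failed_actions: set = field(default_factory=set)

class PPM:
    def __init__(self):
        self.open, self.done, self.unsatisfiable = [], [], []

    def add_goal(self, goal):
        self.open.append(PlanningProblem(goal))

    def record_result(self, problem, outcome, failed_action=None):
        if outcome == "solved":
            # solution found and executed: move to the done list
            self.open.remove(problem)
            problem.status = "done"
            self.done.append(problem)
        elif outcome == "action_failed":
            # the failed action instance is never tried again;
            # the problem itself stays open
            problem.failed_actions.add(failed_action)
        elif outcome == "unsatisfiable":
            self.open.remove(problem)
            problem.status = "unsatisfiable"
            self.unsatisfiable.append(problem)

ppm = PPM()
ppm.add_goal("Got(p1)")
p = ppm.open[0]
ppm.record_result(p, "action_failed", "download(p1, ACM)")  # stays open
ppm.record_result(p, "solved")                              # now done
```

The key design point mirrored here is that an execution failure does not close the problem; it only rules out one action instance, leaving the planner free to search for an alternative plan.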
Iterative Deepening in Hypotheticals: To guarantee step-by-step processing, and availability of partial results of the search for all of the user’s requests as motivated earlier, PPM processes open problems in a round-robin fashion, gradually increasing the maximum complexity level of finding and executing the solution plan. To implement the gradual increase of solution complexity, PPM performs iterative deepening in hypotheticals. A hypothetical is a partial plan that hypothesizes on the value of an unknown proposition or subgoal. For example, having no information on the location of a paper, the planner may adopt a hypothesis that the paper is available from a certain collection, and verify the information by querying the collection. An example of a plan with two hypotheses is a plan that hypothesizes that a paper is available from the author’s homepage, and then, having no information about the author’s homepage, hypothesizes that the URL for the homepage can be found from a known index. By verifying a hypothesis via execution of a sensing action, the planner eventually collects enough information, and thus reduces the incompleteness of the knowledge enough to find a solution plan or find the goal unsatisfiable. PPM maintains a list of all open problems, processed in a loop. At each cycle of the loop, PPM attempts to find a solution for each open problem in turn, increasing the maximum allowed number of hypotheses in a solution plan when necessary, and executes the plan until the processing is completed and the problem is removed from the open list. This combination of iterative deepening in hypotheticals with round-robin processing of planning problems enables effective time sharing between the user’s goals, which is necessary for providing partial results on many user requests simultaneously, and avoiding the bottlenecks of searching for a hard-to-find paper, which may not be the one desired by the user. 5.
Evaluation We performed a pilot study with two users, followed by a user study involving eleven subjects. Most of the subjects were Harvard University students and postdocs; ten were computer scientists, one a physicist. Most, though not all, of the subjects were familiar with Emacs and had previously written papers using \LaTeX{} and \texttt{BibTeX}. The subjects were shown a brief, two-minute demonstration of the system; they were then given a printed tutorial$^3$ and asked to follow the steps of the tutorial. The subjects were next asked to write a paragraph or two of text in the area of their expertise involving citations, using \textit{Writer’s Aid}. All the subjects used the same local bibliography collection, which overlapped with some of the citations some subjects desired to make, but most of the bibliographic records required by the authors were dynamically collected from ResearchIndex. To our surprise, even without access to the writer’s personal \texttt{BibTeX} database, but using only ResearchIndex as another preferred bibliography and the (dynamically located) authors’ homepages in the search for papers, \textit{Writer’s Aid} was able in most cases to successfully locate at least bibliographic records for the papers. The success rate for finding viewable versions was more modest, but users still found the system very helpful. We expect a higher number of papers could be found by expanding the set of sources to include more online collections. After the test, subjects completed a questionnaire allowing freeform answers to the following questions: 1. How hard was it to learn to use \textit{Writer’s Aid}? 2. Was it useful? Would you use it for writing papers? 3. Which modifications to the functionality/interface of \textit{Writer’s Aid} would you recommend? Some users were later interviewed to clarify their responses to Question 3. The success of \textit{Writer’s Aid} is indicated by the answers to Question 2.
To the first part, “Was \textit{Writer’s Aid} useful?”, the replies were: very useful (3), useful (7), moderately useful (1). To the question “Would you use it for writing papers?” ten users answered yes. (The single dissenting user explained that he would not trust any online source with his writings.) To the question “How hard was it to learn to use \textit{Writer’s Aid}?” 4 users answered very easy, 2 easy, and 5 reasonably easy or not hard. In response to Question 3, users suggested adding morphology-aware search, automatic spell checking of keywords, an ability to add a record to the personal bibliographic collection without citing it, and minor alterations to the window interface. We are planning to implement some of these features in the next version of \textit{Writer’s Aid}. 6. Related Work and Future Directions Research presented in this paper has connections to work in several areas, most notably AI-based collaborative interfaces, information integration systems, and Internet search. Like many other information integration systems, \textit{Writer’s Aid} takes advantage of the breadth of bibliographic information available on the web. BIG [10] integrates several AI technologies, including resource-bounded planning and scheduling, to conduct an offline search for information on software packages based on a client’s specification. Barish \textit{et al.} [3] report on a query-planning-based system, called TheaterLoc, that searches online movie-related databases in real time in response to users’ queries. \textit{Writer’s Aid} differs from these and other planning-based information-retrieval systems [11] in carrying out its activities in the context of collaboration with a user in the ongoing writing process, so that this writing process provides context for interpreting the information request.
\textit{Writer’s Aid} is also distinguished from other planning-based information retrieval systems by the capabilities it incorporates for interleaved planning and execution, crucial for integrating information-gathering into the planning process. Collagen [15] is a middleware package based on a theory of collaboration in dialogue [12]; it provides a means for creating interfaces that participate in dialogues with users about their goals and beliefs, suggesting possible courses of action based on the available library of act recipes. Collagen does not include capabilities for automated reasoning about goal achievement beyond the use of a fixed set of recipes. Thus, it lacks \textit{Writer’s Aid}’s ability to satisfy user goals from almost any initial state using a variety of dynamically created courses of action. Collagen’s collaborative strength is its ability to work with the user through a process, known (via a recipe library) to the system, leading to achievement of the user’s goal. The focus in \textit{Writer’s Aid} is on another system capability important for collaboration, namely, the ability to plan for and autonomously carry out a complex task that would otherwise have to be done by the human, integrating the activities of the system-partner with those of the user in a non-intrusive and efficient manner. Other work has explored the use of context in information retrieval. Watson [4] is intended to work with its user proactively, downloading and suggesting information it regards as relevant to a document that the user is currently editing or viewing. Watson creates a search query based on the text and the structure of the document, but not related to any specific user request. However, the user study of Watson [4] evaluated the utility of information provided by Watson statistically; it did not involve the system working “alongside” a user. As a result, the appropriateness of Watson’s search results in interactive use was not evaluated in that study.
In contrast, \textit{Writer’s Aid} takes seriously the fact that when users delegate to a system the task of finding information needed to complete a task (or satisfy a user’s goal), the usefulness of the system depends critically on the relevance of the information retrieved by the system and on the results being available in a timely manner. Otherwise, the time it takes the user to sift through irrelevant information or the time spent waiting for the results may outweigh the time the user saves by not performing the search himself. These performance characteristics in \textit{Writer’s Aid} are ensured by the system adopting the precisely specified user’s search goal and using information sources that are directly related to a well-defined set of data items such as papers and bibliographic records.

$^3$The tutorial is available at \url{http://www.eecs.harvard.edu/~tbabaian/waid/tutor.ps}.

In the future, we plan to extend Writer's Aid to incorporate the context of a citation request for more efficient search and ranking of the results. Another direction we have started to explore is adding the user as a source of information about his or her own preferences and knowledge of the relevance of various online collections to the subject of a paper. Such personalization tasks can be stated declaratively via a set of knowledge goals and satisfied by an action of querying the writer when this information becomes necessary. This representation separates personalization of the interface from its overall architecture, making it more easily adjustable. It also leads to preference elicitation that occurs within the context of a particular task. 7. Conclusion We have presented a writer's assistant system that works collaboratively with a user, achieving the necessary flexibility of behavior through explicit representation, reasoning, and planning with respect to goals and domain knowledge.
Collaborativeness is embodied in the system's commitment to the shared goals of producing accurate, well-formed citations, and in a division of labor in which each participant contributes according to its natural capabilities, with the system pursuing all known avenues toward achieving the shared goals. The use of planning technology to implement collaborative interfaces places new requirements on the knowledge representation and planning methods. We presented a set of extensions to classical planning representations and techniques to satisfy these requirements. In particular, the use of an expressive, yet precise and tractable formalism for knowledge representation, PSIPLAN, and the addition of hypothetical planning to integrate domain actions with sensing actions and interleaved execution, were crucial to the implementation of the collaboration. We conducted a laboratory user study to examine the effectiveness of the system. The results indicate the success of this particular interface and its implementation. Users characterized it as a useful and easy-to-learn tool that they would like to have for academic writing. 8. Acknowledgements The research reported in this paper was supported by the National Science Foundation grants IRI-9618848 and IIS-9978343 to Harvard University. The authors thank Luke Hunsberger, Wheeler Ruml and Christian Lindig for their assistance in developing the system and for helpful comments on the paper, and all participants of the user study. 9. References
Workshop on Numerical & Symbolic Abstract Domains Weakly Relational Numerical Abstract Domains Theory and Application Antoine Miné École Normale Supérieure, Paris January 21st 2005 What Are Numerical Abstract Domains For? In an **Abstract Interpretation** based analyser they can: - discover properties on the **numerical** variables of a program, - **statically**, at compile-time, - **automatically**, without human interaction. **Applications of Numerical Properties:** - Check for illegal arithmetic operations: overflow, division by zero. (Ariane 5 explosion on June 4th 1996 $\rightarrow$ $500$ M loss) - Check for out-of-bound array or pointer arithmetics. (50% of Unix vulnerabilities according to CERT) - Optimisation, debugging information inference. - Parameter to non-numerical analyses. (pointer analyses [Venet], parametric predicate abstractions [Cousot], etc.) Traditional Numerical Domains Non-Relational Domains: Constant Propagation \[ X_i = c_i \] [Kildall73] Signs \[ X_i \geq 0, \; X_i \leq 0 \] [CC76] Intervals \[ X_i \in [a_i, b_i] \] [CC76] Simple Congruences \[ X_i \equiv a_i \; [b_i] \] [Granger89] Interval Congruences \[ X_i \in \alpha_i [a_i, b_i] \] [Masdupuy93] Power Analysis \[ X_i \in \alpha_i^{a_i \mathbb{Z} + b_i}, \alpha_i^{[a_i, b_i]}, \text{etc.} \] [Mastroeni01] Traditional Numerical Domains Relational Domains: Linear Equalities \[ \sum_i \alpha_{ij} X_i = \beta_j \] [Karr76] Linear Congruences \[ \sum_i \alpha_{ij} X_i \equiv \beta_j [\gamma_j] \] [Granger91] Trapezoidal Congruences \[ X_i = \sum_j \lambda_j \alpha_{ij} + \beta_j \] [Masdupuy92] Polyhedra \[ \sum_i \alpha_{ij} X_i \leq \beta_j \] [CH78] Ellipsoids \[ \alpha X^2 + \beta Y^2 + \gamma XY \leq \delta \] [Feret04] Varieties \[ P_i(\vec{X}) = 0, \ P_i \in \mathbb{R}[\mathcal{V}] \] [R-CK04] Recent Issues In Numerical Domains - Granularity in the cost vs. precision trade-off is **too coarse**. 
Few domains can infer **bounds**:
- the interval domain is too **imprecise** (non-relational)
- the polyhedron domain is too **costly** (unbounded in theory, exponential in practice)

\(\Rightarrow\) we can:
- define some **new** relational domains,
- **tweak** the cost vs. precision trade-off of existing domains.

- Relational domains are **not sound** on machine-integers & floating-point numbers! ([Simon], [Goubault and Putot] may talk about this...)

The Need for Relational Domains Loop Invariant Inference Finding a non-relational property may require the inference of a relational loop invariant. Example:

```plaintext
I := 10
V := 0
while • (I ≥ 0) {
  I := I - 1
  if (random()) { V := V + 1 }
}
• // here I = -1 and 0 ≤ V ≤ 11
```

The interval domain will only find V ≥ 0 and I = -1 at •. To prove that V ≤ 11, we need to prove a relational loop invariant at •: V + I ≤ 10.

The Need for Relational Domains Other applications of relationality:
- precise analysis of assignments and tests involving several variables, (we will see examples shortly...)
- analysis of programs with *symbolic* parameters,
- *modular* analysis of procedures, classes, modules, etc.
- inference of *non-uniform* non-numerical invariants. (e.g., non-uniform pointer aliasing analysis [Venet])

Overview
- Formal framework.
- New **numerical abstract domains**: zones and octagons.
- Static **variable packing** technique to cut costs.
- **Symbolic manipulation** techniques to improve precision.
- Adaptation to **floating-point** semantics.
- Application within the **Astrée** analyser and **experimental** results.

From theory to application... and backwards.

Formal Framework Language Syntax We first consider an **idealised** language:
- one data-type: **scalars** in \( \mathbb{I} \), where \( \mathbb{I} \in \{ \mathbb{Z}, \mathbb{Q}, \mathbb{R} \} \),
- no procedure,
- a **finite, fixed** set of variables: \( \mathcal{V} \).
### Instructions \[ \mathcal{I} ::= \begin{align*} X & \leftarrow E & \text{assignment to } X \in \mathcal{V} \\ E & \bowtie 0 \ ? & \text{test } \bowtie \in \{ =, \leq, \ldots \} \end{align*} \] ### Expressions \[ \mathcal{E} ::= \begin{align*} [a, b] & \text{ interval } a \in \mathbb{I} \cup \{ -\infty \}, \ b \in \mathbb{I} \cup \{ +\infty \}, \\ X & \text{ variable } X \in \mathcal{V} \\ -\ E & \text{ unary operator} \\ E \odot E & \text{ binary operators } \odot \in \{ +, \times, \ldots \} \end{align*} \] **Notes:** - \([a, b]\) models a **non-deterministic** choice within an interval, - adaptation to machine-integers and floating-point variables will come later, - other language features are orthogonal. Concrete Semantics **Environments**: maps $\rho \in (\mathcal{V} \rightarrow \mathbb{I})$. **Expression Semantics**: $\llbracket E \rrbracket : (\mathcal{V} \rightarrow \mathbb{I}) \rightarrow \mathcal{P}(\mathbb{I})$ $E$ maps **environments** to **sets** of numerical values: \[ \llbracket [a, b] \rrbracket(\rho) \overset{\text{def}}{=} \{ c \in \mathbb{I} | a \leq c \leq b \} \\ \llbracket X \rrbracket(\rho) \overset{\text{def}}{=} \{ \rho(X) \} \\ \llbracket e_1 + e_2 \rrbracket(\rho) \overset{\text{def}}{=} \{ v_1 + v_2 | v_1 \in \llbracket e_1 \rrbracket(\rho), v_2 \in \llbracket e_2 \rrbracket(\rho) \} \\ \llbracket e_1 / e_2 \rrbracket(\rho) \overset{\text{def}}{=} \{ v_1 / v_2 | v_1 \in \llbracket e_1 \rrbracket(\rho), v_2 \in \llbracket e_2 \rrbracket(\rho) \setminus \{0\} \} \\ \] etc. There is no error state: run-time errors **halt** the program and are not propagated. 
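The set-valued semantics above can be prototyped directly. The sketch below is illustrative only: it works over integers, requires finite interval bounds so the value sets stay finite, and uses floor division to stand in for /; the tuple encoding of expressions is an assumption, not the paper's.

```python
# Toy sketch of the set-valued expression semantics [[e]](rho):
# an expression maps an environment to the SET of its possible values.

def ev(e, rho):
    """[[e]](rho): the set of possible values of expression e."""
    tag = e[0]
    if tag == "itv":                 # [a, b]: non-deterministic choice
        return set(range(e[1], e[2] + 1))
    if tag == "var":                 # a variable X
        return {rho[e[1]]}
    if tag == "neg":                 # - e
        return {-v for v in ev(e[1], rho)}
    if tag == "+":
        return {v1 + v2 for v1 in ev(e[1], rho) for v2 in ev(e[2], rho)}
    if tag == "/":
        # division by zero yields no value: run-time errors halt
        return {v1 // v2 for v1 in ev(e[1], rho)
                for v2 in ev(e[2], rho) if v2 != 0}
    raise ValueError(f"unknown expression tag: {tag}")

vals = ev(("+", ("var", "X"), ("itv", 0, 2)), {"X": 5})   # X + [0, 2]
```

For instance, `vals` here is `{5, 6, 7}`, and dividing by the interval `[-1, 1]` silently drops the zero divisor, matching the rule that errors are not propagated.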
Concrete Semantics Instruction Semantics: \( \{ I \} : \mathcal{P}(\mathcal{V} \rightarrow \mathbb{I}) \rightarrow \mathcal{P}(\mathcal{V} \rightarrow \mathbb{I}) \) A transfer function defines a relation between environments:

- Assignments: \[ \{ X \leftarrow e \}(R) \overset{\text{def}}{=} \{ \rho[ X \mapsto v ] \mid \rho \in R, \ v \in \llbracket e \rrbracket(\rho) \} \]
- Tests: filter environments \[ \{ e \bowtie 0 \ ? \}(R) \overset{\text{def}}{=} \{ \rho \in R \mid \exists v \in \llbracket e \rrbracket(\rho) \text{ such that } v \bowtie 0 \} \]
- Backwards assignments: \[ \{ X \rightarrow e \}(R) \overset{\text{def}}{=} \{ \rho \mid \exists v \in \llbracket e \rrbracket(\rho), \ \rho[ X \mapsto v ] \in R \} \] useful to
  - refine abstract semantics by backwards / forward iterations,
  - perform abstract debugging.

Concrete Semantics Given a control-flow graph \((L, e, I)\):
\[
\begin{array}{ll}
L & \text{program points} \\
e \in L & \text{entry point} \\
I \subseteq L \times \mathcal{I} \times L & \text{arcs}
\end{array}
\]
we seek to compute the **reachability semantics**, the smallest solution of:
\[
\mathcal{X}_l = \begin{cases} (\mathcal{V} \rightarrow \mathbb{I}) & \text{if } l = e \\ \bigcup_{(l',i,l) \in I} \{ i \} (\mathcal{X}_{l'}) & \text{if } l \neq e \end{cases}
\]
that gathers all possible environments at each program point.

**Problem:** This is **not computable** in general. \(\implies\) we will compute **sound over-approximations** of the \(\mathcal{X}_l\)…

Abstract Domains: Formal Definition We will work in the Abstract Interpretation framework, a general theory of sound approximations of semantics [Cousot78].
Numerical Abstract Domain:
- a **computer-representable** set \(\mathcal{D}^\#\) of abstract values, together with:
  - a *concretisation*: \(\gamma: \mathcal{D}^\# \rightarrow \mathcal{P}(\mathcal{V} \rightarrow \mathbb{I})\),
  - a *partial order*: \(\sqsubseteq^\#\), \(\bot^\#\), \(\top^\#\),
  - sound, effective abstract transfer functions \(\{ I \}^\#\): \((\{ I \} \circ \gamma)(\mathcal{X}^\#) \subseteq (\gamma \circ \{ I \}^\#)(\mathcal{X}^\#)\),
  - a sound, effective abstract union \(\cup^\#\): \(\gamma(\mathcal{X}^\#) \cup \gamma(\mathcal{Y}^\#) \subseteq \gamma(\mathcal{X}^\# \cup^\# \mathcal{Y}^\#)\),
  - effective extrapolation operators \(\nabla\), \(\triangle\) if \(\mathcal{D}^\#\) has infinite chains.

\(\implies\) we can perform a reachability analysis in \(L \rightarrow \mathcal{D}^\#\) soundly.

The Zone Abstract Domain Less expressive but simpler than the octagon domain. Zones enrich intervals with invariants of the form: \[ \bigwedge_{i,j} (V_i - V_j \leq c_{ij}) \quad c_{ij} \in \mathbb{I} \] The zone abstract domain features:
- a precision between the interval and polyhedron domains; relational invariants,
- a quadratic memory cost and cubic worst-case time cost.

Zones are used in the model-checking of timed automata and Petri nets, but they need many new abstract operators to suit Abstract Interpretation needs.

Zone Representation **Difference Bound Matrices** (DBMs):
- matrix of size \((n + 1) \times (n + 1)\) with elements in \(\mathbb{I} \cup \{+\infty\}\):
  - \(m_{ij} \neq +\infty\) is an upper bound for \(V_j - V_i\),
  - \(m_{ij} = +\infty\) means that \(V_j - V_i\) is unbounded,
  - \(m_{i0}, m_{0j}\) encode unary constraints: \(-V_i \leq m_{i0}\), \(V_j \leq m_{0j}\),
- \(\gamma(m) \overset{\text{def}}{=} \{ (v_1, \ldots, v_n) \in \mathbb{I}^n \mid \forall i, j, \ v_j - v_i \leq m_{ij}, \text{ with } v_0 = 0 \}\),
- \(m\) is the adjacency matrix of a **weighted directed graph**: \(V_i \xrightarrow{m_{ij}} V_j\).
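A DBM and its concretisation test can be sketched directly from this definition (a toy Python sketch; the encoding and names are illustrative):

```python
# Sketch of the DBM encoding above: an (n+1) x (n+1) matrix over
# I ∪ {+oo}, where m[i][j] bounds V_j - V_i and index 0 stands for a
# constant-zero variable, so row/column 0 encodes unary constraints.
INF = float("inf")

def in_gamma(m, v):
    """Check (v_1, ..., v_n) ∈ gamma(m), taking v_0 = 0."""
    w = (0,) + tuple(v)
    n = len(w)
    return all(w[j] - w[i] <= m[i][j] for i in range(n) for j in range(n))

# Two variables V1, V2 with constraints V1 <= 4 and V1 - V2 <= 1:
m = [[0,   4,   INF],
     [INF, 0,   INF],
     [INF, 1,   0]]
```

Here `in_gamma(m, (3, 5))` holds, while `(5, 1)` violates the unary bound V1 ≤ 4 and `(4, 2)` violates the difference bound V1 - V2 ≤ 1.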
**Example:** (figure omitted: a zone, its DBM, and the corresponding weighted directed graph)

Order Structure The total order on \(\mathbb{I}\) is first extended to \(\bar{\mathbb{I}} \overset{\text{def}}{=} \mathbb{I} \cup \{+\infty\}\), and then point-wise to a partial order on \(\mathcal{D}^\#\):
- \(m \sqsubseteq^\# n \iff \forall i, j, \ m_{ij} \leq n_{ij}\) (point-wise partial order)
- \([m \sqcap^\# n]_{ij} \overset{\text{def}}{=} \min(m_{ij}, n_{ij})\) (greatest lower bound)
- \([m \sqcup^\# n]_{ij} \overset{\text{def}}{=} \max(m_{ij}, n_{ij})\) (least upper bound)
- \([\top^\#]_{ij} \overset{\text{def}}{=} +\infty\) (greatest element)

However:
- \(m \sqsubseteq^\# n \implies \gamma(m) \subseteq \gamma(n)\) but not the converse,
- \(m = n \implies \gamma(m) = \gamma(n)\) but not the converse: \(\gamma\) is not injective!

\(\implies\) we introduce a normal form.

Normal Form **Idea:** Derive *implicit* constraints by summing weights on adjacent arcs: \(V_1 - V_2 \leq 3\), \(V_2 - V_3 \leq -1\), \(V_1 - V_3 \leq 4\).
Summing the first two constraints yields the implicit constraint \(V_1 - V_3 \leq 2\), which tightens the explicit bound \(V_1 - V_3 \leq 4\); in the DBM, the entry \(m_{31} = 4\) is replaced by \(2\).

**Shortest-Path Closure** \(m^*\): Floyd–Warshall algorithm:
\[
m^*_{ij} \overset{\text{def}}{=} m^{n+1}_{ij}, \qquad
m^0_{ij} \overset{\text{def}}{=} m_{ij}, \qquad
m^{k+1}_{ij} \overset{\text{def}}{=} \min(m^k_{ij}, \ m^k_{ik} + m^k_{kj}) \ \text{ if } 0 \leq k \leq n
\]
- derives all implicit constraints in cubic time,
- gives a normal form when \(\gamma(m) \neq \emptyset\): \(m^* = \inf_{\sqsubseteq^\#} \{ n \mid \gamma(n) = \gamma(m) \}\),
- enables emptiness testing: \(\gamma(m) = \emptyset \iff \exists i, \ m^*_{ii} < 0\),
- enables inclusion testing: \(\gamma(m) \subseteq \gamma(n) \iff m^* \sqsubseteq^\# n^*\), etc.

Operator Example: Abstract Union The union of two zones is not always a zone. (figure omitted: two zones and the smallest zone containing their union) \(\sqcup^\#\) is a sound counterpart for \(\cup\): \(\gamma(m) \cup \gamma(n) \subseteq \gamma(m \sqcup^\# n)\). But it may not output the smallest zone encompassing two zones... because of implicit constraints.

Solution: Define \(m \cup^\# n \overset{\text{def}}{=} m^* \sqcup^\# n^*\):
- always the best abstraction: \(\gamma(m \cup^\# n) = \inf_{\subseteq} \{ \gamma(o) \mid \gamma(m) \cup \gamma(n) \subseteq \gamma(o) \}\),
- \(m \cup^\# n\) is already closed: \((m \cup^\# n)^* = m \cup^\# n\).

Note: The intersection \(\sqcap^\#\) behaves differently (dually).

Operator Example: Abstract Assignment We propose several operators with varying cost versus precision trade-offs. **Exact Assignments:** Only for \(X \leftarrow Y + [a, b]\), \(X \leftarrow X + [a, b]\), or \(X \leftarrow [a, b]\), e.g.:
For \(X = V_{j_0}\) and \(Y = V_{i_0}\), the assignment \(X \leftarrow Y + [a, b]\) is computed exactly by:
\[
\{ V_{j_0} \leftarrow V_{i_0} + [a, b] \}^\#(m)_{ij} \overset{\text{def}}{=}
\begin{cases}
-a & \text{if } i = j_0 \text{ and } j = i_0, \\
b & \text{if } i = i_0 \text{ and } j = j_0, \\
+\infty & \text{otherwise, if } i = j_0 \text{ or } j = j_0, \\
m_{ij} & \text{otherwise.}
\end{cases}
\]

**Interval and Polyhedra Based Assignments** We can reuse existing transfer functions from other abstract domains using:
- exact conversion operators: intervals \(\rightarrow\) zones \(\rightarrow\) polyhedra,
- best conversion operators: polyhedra \(\rightarrow\) zones \(\rightarrow\) intervals (using \(*\)).

**e.g.**
- best abstract assignment for linear expressions using polyhedra,
- fast assignment of arbitrary expressions using intervals.

**Operator Example: Abstract Assignment** **Problem:** for many usual assignments, e.g., \( X \leftarrow Y + Z \):
- there is no exact abstraction,
- the interval-based assignment is very imprecise, (not relational enough)
- the polyhedron-based assignment is too costly. (exponential cost)

(LP as in [Sankaranarayanan et al.] may solve this problem...)

\(\implies\) we introduce an operator with intermediate cost versus precision.

**Interval Linear Form Assignments:** \( V_j \leftarrow [a_0, b_0] + \sum_k ([a_k, b_k] \times V_k) \) For each \( i \), derive new bounds on \( V_j - V_i \) by evaluating:
\[ [a_0, b_0] + \sum_{k \neq i} ([a_k, b_k] \times \pi_k(\mathcal{X}^\#)) + ([a_i - 1, b_i - 1] \times \pi_i(\mathcal{X}^\#)) \]
using the **interval** operators \(+, \times\), and the interval projections \( \pi_k \) of the variables \( V_k \).

\(\implies\) we can infer relational invariants for a linear cost. Not optimal because we do not use the relational information in the zone.
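The shortest-path closure and the interval-linear-form bound computation described above can be sketched in Python (a minimal sketch over `float("inf")`-padded matrices and `(lo, hi)` interval pairs; the function names are illustrative, not Miné's actual API):

```python
# Minimal sketches of two zone-domain operations described above.
INF = float("inf")

def closure(m):
    """Shortest-path closure m*: tighten all constraints with the
    implicit ones (Floyd-Warshall, cubic time)."""
    n = len(m)
    c = [row[:] for row in m]
    for k in range(n):
        for i in range(n):
            for j in range(n):
                c[i][j] = min(c[i][j], c[i][k] + c[k][j])
    return c

def is_empty(m):
    """gamma(m) is empty iff the closure exposes a negative cycle."""
    c = closure(m)
    return any(c[i][i] < 0 for i in range(len(m)))

def i_add(a, b):
    """Interval addition on (lo, hi) pairs."""
    return (a[0] + b[0], a[1] + b[1])

def i_mul(a, b):
    """Interval multiplication on (lo, hi) pairs."""
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def bound_vj_minus_vi(const, coeffs, proj, i):
    """Interval-linear-form assignment V_j <- const + sum coeffs[k]*V_k:
    bounds on V_j - V_i via interval evaluation, with V_i's coefficient
    decremented by one (pass i=None for plain bounds on V_j)."""
    acc = const
    for k, (a, b) in coeffs.items():
        if k == i:
            a, b = a - 1, b - 1     # subtract V_i from the form
        acc = i_add(acc, i_mul((a, b), proj[k]))
    return acc

# Slide example: V1 - V2 <= 3, V2 - V3 <= -1, V1 - V3 <= 4
# (index 0 is the constant-zero variable).
m_ex = [[0,   INF, INF, INF],
        [INF, 0,   INF, INF],
        [INF, 3,   0,   INF],
        [INF, 4,   -1,  0]]

# Slide example: X <- Y - Z with interval projections Y, Z in [0, 10].
proj = {"Y": (0, 10), "Z": (0, 10)}
coeffs = {"Y": (1, 1), "Z": (-1, -1)}
```

Here `closure(m_ex)[3][1]` evaluates to `2`, matching the implicit constraint \(V_1 - V_3 \leq 2\) derived above, and `bound_vj_minus_vi((0, 0), coeffs, proj, "Y")` yields `(-10, 0)`, i.e., the relational bound \(-10 \leq X - Y \leq 0\) obtained without consulting the zone itself.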
Operator Example: Abstract Assignment Precision Comparison: Argument \[ \begin{align*} 0 & \leq Y \leq 10 \\ 0 & \leq Z \leq 10 \\ 0 & \leq Y - Z \leq 10 \end{align*} \] \[\Downarrow \quad X \leftarrow Y - Z\] \[ \begin{align*} -10 & \leq X \leq 10 \\ -20 & \leq X - Y \leq 10 \\ -20 & \leq X - Z \leq 10 \end{align*} \] - Interval-based \[ \begin{align*} -10 & \leq X \leq 10 \\ -10 & \leq X - Y \leq 0 \\ -10 & \leq X - Z \leq 10 \end{align*} \] - Interval linear form based \[ \begin{align*} 0 & \leq X \leq 10 \\ -10 & \leq X - Y \leq 0 \\ -10 & \leq X - Z \leq 10 \end{align*} \] - Polyhedron-based (best) Full analysis examples will be presented shortly, within the octagon domain. Operator Example: Widening The zone abstract domain has infinite strictly increasing chains! We need a **widening** $\nabla$ to compute fixpoints in finite time: $$ \begin{align*} X_0^\# & \overset{\text{def}}{=} Y_0^\# \\ X_{i+1}^\# & \overset{\text{def}}{=} X_i^\# \nabla Y_{i+1}^\# \end{align*} $$ should converge in **finite time** towards an over-approximation of $\bigcup_i \gamma(Y_i^\#)$ Example Widenings: Point-wise extensions of interval widenings: - **standard widening**: throw away unstable constraints $$ (m \nabla n)_{ij} \overset{\text{def}}{=} \begin{cases} m_{ij} & \text{if } m_{ij} \geq n_{ij} \\ +\infty & \text{otherwise} \end{cases} $$ - **widening with thresholds** $T$ ($T$ is a finite set) $$ (m \nabla n)_{ij} \overset{\text{def}}{=} \begin{cases} m_{ij} & \text{if } m_{ij} \geq n_{ij} \\ \min \left\{ x \in T \cup \{+\infty\} \mid x \geq n_{ij} \right\} & \text{otherwise} \end{cases} $$ Operator Example: Widening **Important Note:** \[ X_{i+1}^{\#} \overset{\text{def}}{=} (X_i^{\#})^* \nabla Y_{i+1}^{\#} \text{ may diverge!} \] This is because: - \( \nabla \) termination is enforced by setting coefficients to \( +\infty \) - \( \ast \) **tightens** \( +\infty \) **coefficients** into finite ones This is very unlike other operators \( \sqcup^{\#}, \llbracket \cdot \rrbracket^{\#}, \) etc.,
that **benefit** from closure. ([Sankaranarayanan et al.] avoid refining widened constraints) ([Bagnara et al.] may have another answer...) **Semantical Widening:** **Open problem:** find a widening independent from the chosen DBM representation. (cf. polyhedron widening) The Octagon Abstract Domain Octagons extend zones to invariants of the form: \[ \bigwedge_{i,j} \left( \pm V_i \pm V_j \leq c_{ij} \right), \quad c_{ij} \in \mathbb{I}, \quad \mathbb{I} \in \{ \mathbb{Z}, \mathbb{Q}, \mathbb{R} \} \] - Strictly **more expressive** than the zone domain. - Same asymptotic cost: **quadratic** in memory and **cubic** in time. - Precise enough to analyse our loop example. (and more...) Octagon Representation We still use DBMs! **Idea:** Rewrite octagonal constraints as potential constraints on $\mathcal{V}' \stackrel{\text{def}}{=} \{V'_1, \ldots, V'_{2n}\}$. - $V'_{2k-1}$ represents $+V_k$ - $V'_{2k}$ represents $-V_k$ <table> <thead> <tr> <th>the constraint</th> <th>is represented by</th> </tr> </thead> <tbody> <tr> <td>$V_i - V_j \leq c$ ($i \neq j$)</td> <td>$V'_{2i-1} - V'_{2j-1} \leq c$ and $V'_{2j} - V'_{2i} \leq c$</td> </tr> <tr> <td>$V_i + V_j \leq c$ ($i \neq j$)</td> <td>$V'_{2i-1} - V'_{2j} \leq c$ and $V'_{2j-1} - V'_{2i} \leq c$</td> </tr> <tr> <td>$-V_i - V_j \leq c$ ($i \neq j$)</td> <td>$V'_{2i} - V'_{2j-1} \leq c$ and $V'_{2j} - V'_{2i-1} \leq c$</td> </tr> <tr> <td>$V_i \leq c$</td> <td>$V'_{2i-1} - V'_{2i} \leq 2c$</td> </tr> <tr> <td>$V_i \geq c$</td> <td>$V'_{2i} - V'_{2i-1} \leq -2c$</td> </tr> </tbody> </table> **Adapted Concretisation:** of a DBM $m$ of size $2n \times 2n$ $$\gamma(m) \stackrel{\text{def}}{=} \{ (v_1, \ldots, v_n) \mid \forall i, j, v'_j - v'_i \leq m_{ij}, v'_{2i-1} = -v'_{2i} = v_i \}$$ **Octagon Representation** **Coherence:** One octagon constraint can have two encodings.
We require the two encodings to represent the same constraint: \[ \forall i, j, \ m_{ij} = m_{\overline{j}\,\overline{i}} \text{ where } \overline{i} \overset{\text{def}}{=} \begin{cases} i - 1 & \text{if } i \text{ is even} \\ i + 1 & \text{if } i \text{ is odd} \end{cases} \] **Octagon Example:** \[ \begin{align*} V_1 + V_2 &\leq 3 \\ V_2 - V_1 &\leq 3 \\ V_1 - V_2 &\leq 3 \\ -V_1 - V_2 &\leq -3 \\ 2V_2 &\leq 2 \\ -2V_2 &\leq 8 \end{align*} \] Adapted Normal Form The shortest-path closure is not a normal form. We must take into account the implicit constraints $V'_{2i-1} + V'_{2i} = 0$. **Strongly Closed DBM:** when $\mathbb{I} \neq \mathbb{Z}$ - $\forall i, j, k, \ m_{ij} \leq m_{ik} + m_{kj}$ (closed by transitivity) - $\forall i, j, \ m_{ij} \leq (m_{i\overline{i}} + m_{\overline{j}j})/2$ (closed by addition of unary constraints) **Properties:** - Each constraint in a strongly closed DBM is saturated. - There is a unique strongly closed DBM representing a non-empty octagon. - We can construct complete equality and inclusion tests. - We can construct best, exact operators. Adapted Normal Form Modified Floyd–Warshall Algorithm $m^\bullet$: when $\mathbb{I} \neq \mathbb{Z}$ we define: $$ \begin{align*} m^\bullet & \overset{\text{def}}{=} m^n \\ m^0 & \overset{\text{def}}{=} m \\ m^{k+1} & \overset{\text{def}}{=} S(C^{2k+1}(m^k)) \text{ if } 0 \leq k < n \end{align*} $$ where: $$ (S(n))_{ij} \overset{\text{def}}{=} \min(n_{ij}, (n_{i\overline{i}} + n_{\overline{j}j})/2) $$ $$ (C^k(n))_{ij} \overset{\text{def}}{=} \min(n_{ij}, n_{ik} + n_{kj}, n_{i\overline{k}} + n_{\overline{k}j}, n_{ik} + n_{k\overline{k}} + n_{\overline{k}j}, n_{i\overline{k}} + n_{\overline{k}k} + n_{kj}) $$ Properties: - Emptiness test: $\gamma(m) = \emptyset \iff \exists i, m^n_{ii} < 0$. - If $\gamma(m) \neq \emptyset$, $m^\bullet$ is strongly closed. - $m^\bullet$ can be computed in cubic time. - All operators are constructed as in the zone domain, using $\bullet$ instead of $\ast$.
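To make the encoding of octagonal constraints concrete, here is a small Python sketch. Assumptions not in the slides: 0-based indices, so index \(2k\) stands for \(+V_k\) and \(2k+1\) for \(-V_k\) (hence \(\bar{\imath}\) is `i ^ 1`), and `m[a][b]` bounds \(v'_b - v'_a\); all function names are illustrative.

```python
# Encoding +/-V_i +/-V_j <= c into a 2n x 2n DBM (a sketch).
inf = float('inf')

def new_dbm(nvars):
    size = 2 * nvars
    return [[0 if i == j else inf for j in range(size)] for i in range(size)]

def add(m, a, b, c):
    """Record v'_b - v'_a <= c together with its coherent twin entry."""
    m[a][b] = min(m[a][b], c)
    m[b ^ 1][a ^ 1] = min(m[b ^ 1][a ^ 1], c)   # coherence: m[a][b] = m[bar b][bar a]

def constrain(m, si, i, sj, j, c):
    """Add si*V_i + sj*V_j <= c, with si, sj in {+1, -1} and i != j."""
    # si*V_i is v'_p, sj*V_j is v'_q; the constraint becomes v'_p - v'_{bar q} <= c.
    p = 2 * i + (0 if si > 0 else 1)
    q = 2 * j + (0 if sj > 0 else 1)
    add(m, q ^ 1, p, c)

def bound(m, i, lo=None, hi=None):
    """Add unary bounds lo <= V_i <= hi (doubled in the encoding)."""
    if hi is not None:
        add(m, 2 * i + 1, 2 * i, 2 * hi)    # 2*V_i <= 2*hi
    if lo is not None:
        add(m, 2 * i, 2 * i + 1, -2 * lo)   # -2*V_i <= -2*lo

m = new_dbm(2)
constrain(m, +1, 0, +1, 1, 3)   # V_0 + V_1 <= 3, stored twice (coherence)
bound(m, 1, hi=1)               # V_1 <= 1
assert m[3][0] == 3 and m[1][2] == 3 and m[3][2] == 2
```

Each binary constraint lands in two coherent cells, exactly as in the table of encodings above.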
NSAD’05 - Weakly Relational Numerical Abstract Domains - Antoine Miné Integer Case The case $\mathbb{I} = \mathbb{Z}$ is more complex! The strong closure is not sufficient to provide saturation... **Theoretical Solution:** Tight closure, proposed by [Harvey and Stuckey 97], can be used: - $\forall i, j, k, \ m_{ij} \leq m_{ik} + m_{kj}$ - $\forall i, j, \ m_{ij} \leq (m_{i\bar{i}} + m_{\bar{j}j})/2$ - $\forall i, \ m_{i\bar{i}}$ is even (tightness) Unfortunately, the normalisation algorithm runs in $O(n^4)$... **Open problem:** is there an $O(n^3)$ normalisation algorithm? **Practical Solution:** We use the strong closure and abandon the completeness, best precision, and exactness results. In practice, we are precise enough. Octagon Analysis Example: Absolute Value \[ \begin{array}{l} X \leftarrow [-100, 100] \quad (1) \\ Y \leftarrow X \quad (2) \\ \text{if } Y \leq 0 \ \{ \quad (3) \\ \quad Y \leftarrow -Y \quad (4) \\ \} \ \text{else} \ \{ \quad (5) \\ \} \quad (6) \\ \text{if } Y \leq 69 \ \{ \quad (7) \ \cdots X \cdots \ \} \end{array} \] The octagon domain can prove that, at (7), \(-69 \leq X \leq 69\). Invariants at each point: 1. \(-100 \leq X \leq 100\) 2. \(-100 \leq X \leq 100 \ \land \ -100 \leq Y \leq 100 \ \land \ X - Y = 0 \ \land \ -200 \leq X + Y \leq 200\) 3. \(-100 \leq X \leq 0 \ \land \ -100 \leq Y \leq 0 \ \land \ X - Y = 0 \ \land \ -200 \leq X + Y \leq 0\) 4. \(-100 \leq X \leq 0 \ \land \ 0 \leq Y \leq 100 \ \land \ -200 \leq X - Y \leq 0 \ \land \ X + Y = 0\) 5. \(0 \leq X \leq 100 \ \land \ 0 \leq Y \leq 100 \ \land \ X - Y = 0 \ \land \ 0 \leq X + Y \leq 200\) 6. \(-100 \leq X \leq 100 \ \land \ 0 \leq Y \leq 100 \ \land \ -200 \leq X - Y \leq 0 \ \land \ 0 \leq X + Y \leq 200\) 7. \(-69 \leq X \leq 69 \ \land \ 0 \leq Y \leq 69 \ \land \ -138 \leq X - Y \leq 0 \ \land \ 0 \leq X + Y \leq 138\) We require bounds on both \(X - Y\) and \(X + Y\)!
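Stepping back to the two widenings defined a few slides earlier: both are point-wise extensions of interval widenings, and are short to sketch in Python (same hypothetical list-of-lists DBM encoding as before; a sketch, not a real analyser's operator).

```python
# Point-wise widenings on DBMs (a sketch).
inf = float('inf')

def widen(m, n):
    """Standard widening: throw away constraints that are not stable."""
    return [[m[i][j] if m[i][j] >= n[i][j] else inf
             for j in range(len(m))] for i in range(len(m))]

def widen_thresholds(m, n, T):
    """Widening with a finite threshold set T: jump to the next threshold."""
    ts = sorted(T) + [inf]
    def cell(a, b):
        if a >= b:
            return a
        return next(x for x in ts if x >= b)   # smallest threshold >= b
    return [[cell(m[i][j], n[i][j]) for j in range(len(m))]
            for i in range(len(m))]

m = [[0, 5], [1, 0]]
n = [[0, 7], [1, 0]]
assert widen(m, n) == [[0, inf], [1, 0]]
assert widen_thresholds(m, n, {10, 100}) == [[0, 10], [1, 0]]
```

With thresholds, an unstable bound degrades gradually instead of jumping straight to \(+\infty\), which is what the rate-limiter example below relies on.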
Octagon Analysis Example: Rate Limiter \[ \begin{aligned} Y & \leftarrow 0 \\ \text{while } & \text{ random}() \ \{ \ \textcircled{1} \\ & X \leftarrow [-128, 128] \\ & D \leftarrow [0, 16] \\ & S \leftarrow Y \\ & R \leftarrow X - S \\ & Y \leftarrow X \\ & \text{if } R \leq -D \{ \ Y \leftarrow S - D \} \text{ else} \\ & \text{if } D \leq R \{ \ Y \leftarrow S + D \} \\ \} \end{aligned} \] \(Y\) is constrained to follow \(X\) without changing too rapidly: we have \(Y \in [-128, 128]\). - The octagon domain can prove that \(|Y| \leq M\) is \textbf{stable} at \(\textcircled{1}\), if \(M \geq 144\). We need the \textbf{widening with thresholds} and the \textbf{interval linear form assignment}. - The polyhedron domain \textbf{can} prove that \(|Y| \leq 128\). - The interval domain \textbf{cannot} prove any bound to be stable. Static Variable Packing Problem: Even a quadratic / cubic cost may be too costly in practice. Solution: Do not relate all the variables together! - Split $\mathcal{V}$ into packs $\mathcal{V}_1, \ldots, \mathcal{V}_m \subseteq \mathcal{V}$. - Associate one relational element per $\mathcal{V}_i$. We are relational inside packs and non-relational between packs. - For flexibility, a variable may appear in several packs. The packing defines a precision versus cost trade-off. The packing is static. Static Variable Packing Operator Adaptation - Union, intersection, widening, etc. are defined **point-wise**. - For assignments, tests: - for each pack $\mathcal{V}_i$ - **project** the expression on the set of variables $\mathcal{V}_i$ - *(e.g., by replacing variables with intervals)* - apply the transfer function on $\mathcal{V}_i$ **Note:** We could perform inter-packing reduction using common variables as pivot... but we prefer to adapt the packing. (more predictable cost) **Cost** The cost depends only on: - The size of each pack $|\mathcal{V}_1|, \ldots, |\mathcal{V}_m|$. - The number of packs each variable appears in.
$\Longrightarrow$ It is interesting when there are **many small packs**. Problem: How to determine a good packing $\mathcal{V}_1, \ldots, \mathcal{V}_m$? Some ideas: - Rely on variable scope. (does not help when many globals) - Rely on variables occurring simultaneously in expressions. - Rely on a previous analysis. (packing optimisation) This must be done on a per-programming-style basis! More of this when we talk about Astrée... Symbolic Enhancement Methods Core Principle **Idea:** Replace expressions with nicer ones on the fly. Suppose that $\forall \rho \in \gamma(\mathcal{X}^\#), \llbracket e \rrbracket(\rho) \subseteq \llbracket e' \rrbracket(\rho)$, then: $$(\{ V \leftarrow e \} \circ \gamma)(\mathcal{X}^\#) \subseteq (\gamma \circ \{ V \leftarrow e' \}^{\#})(\mathcal{X}^\#)$$ $\implies$ we can safely use $\{ V \leftarrow e' \}^{\#}(\mathcal{X}^\#)$ in place of $\{ V \leftarrow e \}^{\#}(\mathcal{X}^\#)$. The same holds for tests and backward assignments. **Example Application:** - Replace a non-linear assignment by a linear one. If $X \in [0, 1]$ in $\mathcal{X}^\#$, we replace $\{ V \leftarrow X \times Y \}^{\#}(\mathcal{X}^\#)$ with $\{ V \leftarrow [0, 1] \times Y \}^{\#}(\mathcal{X}^\#)$. **Note:** Interactions between numerical abstract values $\mathcal{X}^\#$ and expression transformations. ($\neq$ performing a static program transformation before the analysis) Linearisation **Goal:** Put arbitrary expressions into the form: \( [a_0, b_0] + \sum_k ([a_k, b_k] \times V_k) \). **Interval Linear Form Manipulations:** Resemble a vector space structure.
- \(([a_0, b_0] + \sum_k [a_k, b_k] \times V_k) + ([a_0', b_0'] + \sum_k [a_k', b_k'] \times V_k) \overset{\text{def}}{=} ([a_0, b_0] + [a_0', b_0']) + \sum_k ([a_k, b_k] + [a_k', b_k']) \times V_k\) - \([a, b] \times ([a_0, b_0] + \sum_k [a_k, b_k] \times V_k) \overset{\text{def}}{=} ([a, b] \times [a_0, b_0]) + \sum_k ([a, b] \times [a_k, b_k]) \times V_k\) - \(\iota\left([a_0, b_0] + \sum_k [a_k, b_k] \times V_k,\ \mathcal{X}^\#\right) \overset{\text{def}}{=} [a_0, b_0] + \sum_k ([a_k, b_k] \times \pi_k(\mathcal{X}^\#))\) (on-the-fly intervalisation) We use interval arithmetic \(+\), \(\times\), and the interval projection \(\pi_k\). Linearisation **Linearising an expression:** \( \langle e \rangle \) defined by structural induction: - \( \langle V_i \rangle(\mathcal{X}^\#) \overset{\text{def}}{=} [1, 1] \times V_i \) - \( \langle e_1 + e_2 \rangle(\mathcal{X}^\#) \overset{\text{def}}{=} \langle e_1 \rangle(\mathcal{X}^\#) + \langle e_2 \rangle(\mathcal{X}^\#) \) - \( \langle e_1 \times e_2 \rangle(\mathcal{X}^\#) \overset{\text{def}}{=} [a, b] \times \langle e_2 \rangle(\mathcal{X}^\#) \) when \( \langle e_1 \rangle(\mathcal{X}^\#) = [a, b] \) - \( \langle e_1 \times e_2 \rangle(\mathcal{X}^\#) \overset{\text{def}}{=} [a, b] \times \langle e_1 \rangle(\mathcal{X}^\#) \) when \( \langle e_2 \rangle(\mathcal{X}^\#) = [a, b] \) - \( \langle e_1 \times e_2 \rangle(\mathcal{X}^\#) \overset{\text{def}}{=} \iota(\langle e_1 \rangle(\mathcal{X}^\#), \mathcal{X}^\#) \times \langle e_2 \rangle(\mathcal{X}^\#) \) or \( \iota(\langle e_2 \rangle(\mathcal{X}^\#), \mathcal{X}^\#) \times \langle e_1 \rangle(\mathcal{X}^\#) \) otherwise In **non-linear multiplication**: we must **choose** whether to intervalise \( e_1 \) or \( e_2 \). **Example:** intervalise the factor with the smallest bounds \[ X \in [0, 1], \ Y \in [-10, 10] \implies \langle X \times Y \rangle(\mathcal{X}^\#) = [0, 1] \times Y \] Linearisation Applications: - Interval domain: linearisation provides **simplification for free**. Example: \((X + Y) - X\) where \(X, Y \in [0, 1]\). - without linearisation: \(\llbracket (X + Y) - X \rrbracket^\#(\mathcal{X}^\#) = [-1, 2]\), - with linearisation: \(\llbracket \langle (X + Y) - X \rangle(\mathcal{X}^\#) \rrbracket^\#(\mathcal{X}^\#) = \llbracket Y \rrbracket^\#(\mathcal{X}^\#) = [0, 1]\). - Octagon domain: we can use our interval linear form transfer functions. - We can abstract further into expressions of the form: \([a_0, b_0] + \sum_k c_k \times V_k\). This can be fed to the polyhedron domain.
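A minimal Python sketch of these manipulations, representing an interval linear form as a pair `(const_interval, {var: coeff_interval})`. The names and the representation are illustrative, not from the slides; only one multiplication strategy is shown ("intervalise the factor with the smaller range").

```python
# Interval linear forms and their vector-space-like operations (a sketch).
def iadd(a, b):              # interval addition
    return (a[0] + b[0], a[1] + b[1])

def imul(a, b):              # interval multiplication
    ps = [x * y for x in a for y in b]
    return (min(ps), max(ps))

def lf_add(f, g):            # sum of two linear forms
    c = iadd(f[0], g[0])
    ks = set(f[1]) | set(g[1])
    return (c, {k: iadd(f[1].get(k, (0, 0)), g[1].get(k, (0, 0))) for k in ks})

def lf_scale(i, f):          # interval * linear form
    return (imul(i, f[0]), {k: imul(i, v) for k, v in f[1].items()})

def intervalise(f, env):     # iota: flatten to one interval using var ranges
    acc = f[0]
    for k, coef in f[1].items():
        acc = iadd(acc, imul(coef, env[k]))
    return acc

# Linearising X * Y with X in [0, 1], Y in [-10, 10]:
env = {'X': (0, 1), 'Y': (-10, 10)}
x = ((0, 0), {'X': (1, 1)})
y = ((0, 0), {'Y': (1, 1)})
res = lf_scale(intervalise(x, env), y)   # intervalise the smaller factor, X
assert res == ((0, 0), {'Y': (0, 1)})    # i.e. <X * Y> = [0, 1] * Y
```

The final assertion reproduces the slide's example, where intervalising \(X\) rather than \(Y\) keeps the result relational in \(Y\).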
The result greatly **depends on the chosen multiplication strategy**! **Open problem:** find strategies with **theoretical precision guarantees**. Symbolic Constant Propagation Example: \( X \leftarrow Y + Z; \ U \leftarrow X - Z \) \( \{ U \leftarrow X - Z \}^\# \) is replaced with \( \{ U \leftarrow (Y + Z) - Z \}^\# \)... ...which is linearised into \( \{ U \leftarrow Y \}^\# \). Technique: \( X^\# \in D^\# \) is enriched with a map \( S^\# \in (V \rightarrow E) \). - Abstract elements \( \langle X^\#, S^\# \rangle \) now represent: \[ \gamma \langle X^\#, S^\# \rangle \overset{\text{def}}{=} \{ \rho \in \gamma(X^\#) \mid \forall i, \; \rho(V_i) \in \llbracket S^\#(V_i) \rrbracket(\rho) \}. \] - Abstract assignments \( \{ X \leftarrow e \}^\# \langle X^\#, S^\# \rangle \) - propagate \( S^\# \) into \( e \) to get \( e' \) and evaluate \( X'^\# \overset{\text{def}}{=} \{ X \leftarrow e' \}^\#(X^\#) \), - kill information on \( X \) in \( S^\# \), then add \( X = e \). Note: We must choose how far to propagate. Adaptation to Floating-Point IEEE 754-1985 Floating-Point Numbers: We consider the **IEEE 754-1985 standard** because: - it is widely implemented in today’s hardware (Intel, Motorola), - it is supported by the C language (and many others).
**Example: 32-bit “single precision” float numbers** (figure: bit layout: sign \(s\), exponent \(e\), fraction \(b_1 \cdots b_{23}\)) The set $\mathbb{F}$ of floats is composed of: - **normalised** numbers: $(-1)^s \times 2^{e-127} \times 1.b_1 \cdots b_{23}$ ($1 \leq e \leq 254$) - **denormalised** numbers: $(-1)^s \times 2^{e-126} \times 0.b_1 \cdots b_{23}$ ($e = 0, b \neq 0$) - **signed zeros**: $+0$ and $-0$ - **infinities and error codes**: $+\infty$, $-\infty$, $NaN$ IEEE 754-1985 Arithmetic Floating-Point Expressions $\mathcal{E}_f$: $$\mathcal{E}_f ::= [a, b] \quad \text{interval } a, b \in \mathbb{F}$$ $$\quad \quad X \quad \text{variable } X \in \mathcal{V}$$ $$\quad \quad \ominus \mathcal{E}_f \quad \text{unary operator}$$ $$\quad \quad \mathcal{E}_f \odot \mathcal{E}_f \quad \text{binary operators } \odot \in \{\oplus, \otimes, \ldots\}$$ Floating-Point Arithmetic: Differences between floating-point and $\mathbb{Q}, \mathbb{R}$ arithmetic: - **rounding** to a representable float occurs, several types of rounding: *towards* $+\infty$, $-\infty$, 0 or *to nearest*. - **overflow**: large numbers, division by 0 generate $+\infty$ or $-\infty$, - **underflow**: small numbers round to $+0$ or $-0$, - **invalid operations**: $0/0$, $(+\infty) + (-\infty)$, etc. generate $NaN$. Chosen Floating-Point Semantics Restrict to programs that use $\mathbb{F}$ as “approximated reals”: - **Rounding** and **underflow** are **benign**, but we must consider all rounding directions! - **Overflow** and **invalid operations** result in a **run-time error** $\Omega$. $\implies$ Error-free computations live in $\mathbb{F'} \simeq \mathbb{F} \cap \mathbb{R}$, assimilated to a finite subset of $\mathbb{R}$.
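For the interval domain, the adaptation amounts to outward rounding of the bounds. A conservative Python sketch (an assumption of this sketch: each bound is widened by one ulp with `math.nextafter`, Python 3.9+, which is sound for every IEEE rounding direction; a real analyser would switch the FPU rounding mode instead):

```python
# Sound interval addition over floats: whatever direction the hardware
# rounds, the exact real sum of any x in [a], y in [b] stays enclosed,
# because each bound is pushed one ulp outward.
import math

def sound_add(a, b):
    lo = math.nextafter(a[0] + b[0], -math.inf)   # lower bound rounded down
    hi = math.nextafter(a[1] + b[1], math.inf)    # upper bound rounded up
    return (lo, hi)

a, b = (0.1, 0.2), (0.3, 0.4)
lo, hi = sound_add(a, b)
assert lo < a[0] + b[0] and a[1] + b[1] < hi      # strictly widened outward
```

One ulp suffices because round-to-nearest errs by at most half an ulp and the directed modes err by at most one ulp on one side.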
### Partial Definition of $\llbracket e \rrbracket_f$: (with rounding towards $+\infty$) - $\llbracket e_1 \oplus e_2 \rrbracket_f(\rho) \overset{\text{def}}{=} \{ R(v_1 + v_2) \mid v_1 \in \llbracket e_1 \rrbracket_f(\rho), \ v_2 \in \llbracket e_2 \rrbracket_f(\rho) \}$, - etc. - $R(x) \overset{\text{def}}{=} \begin{cases} \Omega & \text{if } x = \Omega \text{ or } x > 2^{127}(2 - 2^{-23}) \\ \min \{ y \in \mathbb{F'} \mid y \geq x \} & \text{otherwise} \end{cases}$ - etc. The interval domain is easy to adapt. We simply round lower bounds toward \(-\infty\) and upper bounds toward \(+\infty\). Relational domains **cannot** manipulate floating-point expressions. Such domains require properties of \(\mathbb{Q}\), \(\mathbb{R}\) not true in floating-point arithmetic! \[ (X - Y \leq c) \land (Y - Z \leq d) \implies (X - Z \leq c + d) \] \[ (X \ominus Y \leq c) \land (Y \ominus Z \leq d) \not\implies (X \ominus Z \leq c \oplus d) \] \[ (10^{22} \oplus 1.000000019 \cdot 10^{38}) \ominus 1.000000019 \cdot 10^{38} = 0 \] **Solution:** - \(\llbracket e \rrbracket_f\) is abstracted as an **interval linear form on** \(\mathbb{Q}\). - Invariant semantics will be expressed **using** \(\mathbb{Q}\), \(+\), \(-\), \(\ldots\), not \(\mathbb{F}'\), \(\oplus\), \(\ominus\). \(\implies\) We keep the same abstract domains and operators as before. Floating-Point Linearisation Rounding Error on Linear Forms: Its magnitude is the maximum of: - a relative error $\varepsilon$ of amplitude $2^{-23}$, expressed as a linear form: \[ \varepsilon([a, b] + \sum_i [a_i, b_i] \times V_i) \] \[ \overset{\text{def}}{=} \max(|a|, |b|) \times [-2^{-23}, 2^{-23}] + \sum_i (\max(|a_i|, |b_i|) \times [-2^{-23}, 2^{-23}]) \times V_i \] (normalised numbers) - an absolute error $\omega \overset{\text{def}}{=} [-2^{-159}, 2^{-159}]$ (denormalised numbers). $\Rightarrow$ We sum these two causes of rounding.
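The relative-error term \(\varepsilon\) above is directly computable. A Python sketch, on the same hypothetical `(const_interval, {var: coeff_interval})` representation of interval linear forms used earlier (single precision, relative magnitude \(2^{-23}\); the \(\omega\) term is omitted here):

```python
# eps(f): relative rounding error of an interval linear form (a sketch).
REL = 2.0 ** -23          # relative error amplitude for float32

def mag(i):
    """max(|a|, |b|) for an interval i = (a, b)."""
    return max(abs(i[0]), abs(i[1]))

def eps(f):
    c, coeffs = f
    rel = (-REL * mag(c), REL * mag(c))
    return (rel, {k: (-REL * mag(v), REL * mag(v)) for k, v in coeffs.items()})

f = ((-2.0, 4.0), {'X': (0.5, 1.0)})
e = eps(f)
assert e[0] == (-4.0 * REL, 4.0 * REL)
assert e[1]['X'] == (-REL, REL)
```

Note how \(\varepsilon(f)\) is itself a linear form, so it can simply be added to \(f\), which is what the linearisation rules below do.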
Linearisation $\langle e \rangle_f$: - $\langle e_1 \oplus e_2 \rangle_f(X^\#) \overset{\text{def}}{=} \langle e_1 \rangle_f(X^\#) + \langle e_2 \rangle_f(X^\#) + \varepsilon(\langle e_1 \rangle_f(X^\#)) + \varepsilon(\langle e_2 \rangle_f(X^\#)) + \omega$ - $\langle [a, b] \otimes e_2 \rangle_f(X^\#) \overset{\text{def}}{=} ([a, b] \times \langle e_2 \rangle_f(X^\#)) + \varepsilon([a, b] \times \langle e_2 \rangle_f(X^\#)) + \omega$ - etc. **Application of Floating-Point Linearisation** **Abstract Assignment:** \( V \leftarrow e \) We first evaluate \( e \) in the floating-point interval domain. - If there is no run-time error \( \Omega \) detected, then \[ \forall \rho \in \gamma(X^\#), \ \llbracket e \rrbracket_f(\rho) \subseteq \llbracket \langle e \rangle_f(X^\#) \rrbracket(\rho) \] and we can feed \( \{ V \leftarrow \langle e \rangle_f(X^\#) \}^\# \) to an abstract domain in \( \mathbb{Q} \). - If \( \Omega \) is detected, we can still fall back to the interval domain. **Example:** \[ \begin{align*} Z \leftarrow X \ominus (0.25 \otimes X) & \quad \text{is linearised as} \\ Z \leftarrow ([0.749 \cdots, 0.750 \cdots] \times X) + (2.35 \cdots 10^{-38} \times [-1, 1]) \end{align*} \] - Allows simplification even in the interval domain. e.g., if \( X \in [-1, 1] \), we get \( |Z| \leq 0.750 \cdots \) instead of \( |Z| \leq 1.25 \cdots \) - Allows using a relational abstract domain. (zone, etc.) Floating-Point Octagons We are now sound, but not very efficient: abstract operations are expressed in $\mathbb{Q}$. This requires costly arbitrary precision exact rational packages! **Solution:** Perform all abstract computations in $\mathbb{F}$: - **linearisation:** use sound floating-point interval arithmetic, - **octagon domain:** upper bound computations are rounded towards $+\infty$. We lose some precision... We gain much speed. **Note:** Sound algorithms in $\mathbb{F}$ are much harder to provide for polyhedra!
Floating-Point Abstractions To sum up, the following sound approximations are made: 1. **linearisation**: rounding errors are treated as non-deterministic, 2. **linearisation**: non-linear computations are “intervalised”, 3. **abstract domain**: limits the expressiveness, 4. **abstract operators**, 5. **implementation in \( \mathbb{F} \)**: extra rounding errors. Due to 1 and 5, our best abstraction results no longer hold! Despite the unpredictability of 5, abstract computations are stable in many cases: - when concrete computations are naturally **contracting**, e.g., \( X \leftarrow 0.5X + [-1, 1] \), - when concrete computations have explicit **limiters**, - specific **widenings** and **narrowings** can help. Some more theoretical work is needed to characterise the stability. Application to Astrée Presentation of Astrée Astrée: - Static analyser developed at the ENS. - Checks for run-time errors in reactive C code. (integer and float overflows, etc.) - Aimed at automatically proving correctness: the 0-alarm goal. Analysed Code Features: A real-life example: - primary flight control software for the Airbus A340 fly-by-wire system, - 70 000 lines of C, - 10 000 global variables, 5 000 of which are 32-bit floating-point, - one very large loop executed $3 \times 10^6$ times. Numerical Abstract Domain Choice Astrée uses the **octagon** domain in preference to the polyhedron domain because: - it has a much **smaller asymptotic cost**, but also, - it has **interval linear form operators** able to abstract float expressions, - it can be easily **implemented using float numbers**. Whenever possible, Astrée uses the **interval** domain with **linearisation** and **symbolic constant propagation** because it has a quasi-linear cost. **Packing** is used to limit the use of non-linear cost domains. All **relational** domains are built on top of our **floating-point linearisation**: - the octagon domain, - filters [Feret04] and arithmetic-geometric progression domains [Feret05].
Packing Results There are too many variables even for the octagon domain \(\implies\) we use packing. **Automatic Packing:** Using simple syntactic criteria - associate one pack per syntactic block, - put only variables related in the block’s expressions, ignoring sub-blocks, - ignore obviously non-linear terms, - relate variables in tests to both the directly enclosing and nested blocks. **Results:** <table> <thead> <tr> <th># lines</th> <th># variables</th> <th># packs</th> <th>avg. size</th> <th>$\sqrt{\sum \text{size}^2}$</th> <th>$\sqrt[3]{\sum \text{size}^3}$</th> </tr> </thead> <tbody> <tr> <td>370</td> <td>100</td> <td>20</td> <td>3.6</td> <td>4.8</td> <td>6.2</td> </tr> <tr> <td>9 500</td> <td>1 400</td> <td>200</td> <td>3.1</td> <td>4.6</td> <td>6.6</td> </tr> <tr> <td>70 000</td> <td>14 000</td> <td>2 470</td> <td>3.5</td> <td>5.2</td> <td>7.8</td> </tr> <tr> <td>226 000</td> <td>47 500</td> <td>7 429</td> <td>3.5</td> <td>4.5</td> <td>5.8</td> </tr> <tr> <td>400 000</td> <td>82 000</td> <td>12 964</td> <td>3.3</td> <td>4.1</td> <td>5.3</td> </tr> </tbody> </table> \(\implies\) Cost is a **linear** function of code size: the method is **scalable**. ## Analysis Results On a 64-bit AMD Opteron 248, single processor.
<table> <thead> <tr> <th># lines</th> <th>without symbolic</th> <th></th> <th>without octagon</th> <th></th> <th>with everything</th> <th></th> </tr> </thead> <tbody> <tr> <td></td> <td>time</td> <td>memory</td> <td>alarms</td> <td>time</td> <td>memory</td> <td>alarms</td> </tr> <tr> <td>370</td> <td>1.8s</td> <td>16 MB</td> <td>0</td> <td>1.7s</td> <td>14 MB</td> <td>0</td> </tr> <tr> <td>9,500</td> <td>90s</td> <td>81 MB</td> <td>8</td> <td>75s</td> <td>75 MB</td> <td>8</td> </tr> <tr> <td>70,000</td> <td>2h 40mn</td> <td>559 MB</td> <td>391</td> <td>3h 17mn</td> <td>537 MB</td> <td>58</td> </tr> <tr> <td>226,000</td> <td>11h 16mn</td> <td>1.3 GB</td> <td>141</td> <td>7h 8mn</td> <td>1.0 GB</td> <td>165</td> </tr> <tr> <td>400,000</td> <td>22h 8mn</td> <td>2.2 GB</td> <td>282</td> <td>20h 31mn</td> <td>1.7 GB</td> <td>804</td> </tr> </tbody> </table> ⇒ Our work is instrumental in proving the code's correctness! **Note:** These results are a few months old; they have improved since. Analysis Screenshot Conclusion Summary To sum up, we proposed: - **New relational abstract domains between intervals and polyhedra.** Provides new theoretical results. (properties of closure) Design and proofs of soundness, exactness, best precision of abstract operators. - **Generic techniques for the local enhancement of domains:** Linearisation, symbolic constant propagation. Avoid the need for more expressive domains. - **Adaptation to floating-point arithmetics.** First relational domains to relate floating-point variable values. - **Integration within the Astrée analyser.** Motivated new research. (abstract operators, packing, etc.) Provided experimental results on real-life examples. Future Work - Extend the *spectrum of choices for cost vs. precision trade-offs*: - Define new abstract domains. (e.g., between octagons and polyhedra; Octahedra, TVPI) - Define alternate abstract operators. (fine-grain control, widenings) - Local refinement techniques, non-homogeneous precision.
(extend packing) - Theoretical results on linearisation and symbolic propagation techniques. (precision guarantees) - Consider *new* numerical properties, *adapted to*: - Complex numerical algorithms. (finite element methods) - Non-numerical properties parametrised by a numerical domain. (e.g., non-uniform pointer analysis) - Parametric predicate abstractions. (complex functional properties, e.g., sorting algorithms) Thank you for your attention!
Published Version: http://doi.acm.org/10.1145/502716.502722
Citable Link: http://nrs.harvard.edu/urn-3:HUL.InstRepos:2252600

A Writer's Collaborative Assistant

Tamara Babaian, CIS Dept., Bentley College, Waltham, MA 02452, tbabaian@bentley.edu
Barbara J. Grosz, DEAS, Harvard University, Cambridge, MA 02138, grosz@deas.harvard.edu
Stuart M. Shieber, DEAS, Harvard University, Cambridge, MA 02138, shieber@deas.harvard.edu

Abstract

In traditional human-computer interfaces, a human master directs a computer system as a servant, telling it not only what to do, but also how to do it. Collaborative interfaces attempt to realign the roles, making the participants collaborators in solving the person's problem. This paper describes Writer's Aid, a system that deploys AI planning techniques to enable it to serve as an author's collaborative assistant. Writer's Aid differs from previous collaborative interfaces in both the kinds of actions the system partner takes and the underlying technology it uses to do so.
While an author writes a document, Writer's Aid helps by identifying and inserting citation keys and by autonomously finding and caching potentially relevant papers and their associated bibliographic information from various on-line sources. This autonomy, enabled by the use of a planning system at the core of Writer's Aid, distinguishes this system from other collaborative interfaces. The collaborative design and its division of labor result in more efficient operation: faster and easier writing on the user's part and more effective information gathering on the part of the system. Subjects in our laboratory user study found the system effective and the interface intuitive and easy to use.

1. Introduction and Motivation

In traditional human-computer interfaces, a person acts as the master directing a computer-system servant. Collaborative interfaces [17] attempt to realign the roles, making the participants collaborators in solving the user's problem. Formal models of collaboration [5, 8, 7] identify as some of the key features of a collaborative activity commitment to a shared, or joint, goal; an agreed-on division of labor; and communication between the parties to enable the satisfaction of joint goals. Whereas in a traditional interface the human user is the repository of all goals and takes all the initiative in determining ways to satisfy them, in a collaborative interface the participants establish shared goals and both take initiative in satisfying them. For example, the GLIDE system [16] is a network-diagram layout tool in which the user and the computer simultaneously and seamlessly work to satisfy the user's layout goals. Goal-sharing is achieved by the user's conveying layout goals through direct manipulation, and the division of labor in achieving the goals is implicit in the design of the system as a whole. Thus, a level of collaboration is achieved without explicit reasoning about goals or the state of the world.
The Distributed Information Access for Learning (DIAL) system [13] provides for multi-media interactions with a complex information system; DIAL works with users to identify information relevant to their needs. The manner in which DIAL interacts collaboratively derives from the SharedPlans theory of collaboration [7]. DIAL uses explicit representations of recipes for domain actions and reasons about intentional contexts to lessen the amount of information a user needs to provide in querying the system. It demonstrates both the efficacy of deploying a model of collaboration to inform the design of a system and the system limitations that arise from limited reasoning about knowledge and actions. GLIDE and DIAL were designed to directly implement key features of a formal model of collaboration, handling various belief and intentional constructs implicitly. The formal model of collaboration serves as a guide in the design of the system but is not reasoned with directly. An alternative design philosophy is found in the Collagen system [14], in which the formal model is directly reasoned with, mechanisms are incorporated to manage databases of beliefs and intentions, and a recipe library of predefined plans is used. In this case, the formal model of collaboration is treated as a specification of the implementation. In this paper, we explore another part of the design space of collaborative interfaces. We describe a writer's collaborative assistant, implemented in a system called Writer's Aid, designed to support an author's writing efforts by performing various bibliographic tasks that typically arise in the process of writing a research manuscript. As in GLIDE and DIAL, Writer's Aid follows the design-guide approach. Also like earlier systems, the division of labor between the user and Writer's Aid is predefined and constant.
A distinguishing feature of Writer's Aid is its ability to autonomously generate and execute plans to achieve goals provided by the user and adopted by the system. This autonomy, enabled by use of automated planning, also distinguishes Writer's Aid from other collaborative interfaces with predefined recipes. It enables **Writer's Aid** to act as a robust collaborative partner, undertaking tasks in the service of a joint goal (producing a manuscript with well-formed citations) and pursuing all known avenues to accomplish those tasks. The use of planning to organize the behavior of a collaborative system is especially important in tasks for which there is more than one possible course of action and where some of the actions may unpredictably fail. Dealing with bibliographic records and papers is one such problem domain. Papers and bibliographic information are often available from multiple electronic sources such as digital libraries, authors' homepages, and on-line bibliographies. It is burdensome for a person to search different sources systematically and thoroughly to locate papers, and tedious to compose bibliographic records. Because Internet searches are typically incomplete, many authors also must consult hard copies of journals and conference proceedings. The creation of citations is also disruptive to the writing process. Most such work is more appropriately done by a computer system that can plan for a wide variety of approaches to data gathering and pursue them exhaustively. Similarly, many actions, such as accessing bibliographic databases or web resources, can fail (for instance, due to a server failure). In such a case, a planner can dynamically recover and replan, efficiently reusing already obtained information, until a goal is satisfied or all ways of satisfying it fail.
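The recover-and-replan behavior just described can be sketched as follows. This is an illustrative toy, not Writer's Aid's implementation: the function names, the source list, and the simulated outage set are all assumptions.

```python
# Toy sketch of recover-and-replan over failure-prone retrieval actions.
# Each source is one alternative way of satisfying the same goal; results
# already obtained are cached and reused instead of being re-fetched.

def fetch_bibrecord(source, key, outages):
    """Stand-in for querying one bibliographic source; raises on outage."""
    if source in outages:
        raise ConnectionError(f"{source} unavailable")
    return {"key": key, "source": source}

def plan_and_execute(key, sources, outages, cache):
    """Try each candidate source until one succeeds; cache the result."""
    if key in cache:                  # reuse already obtained information
        return cache[key]
    for source in sources:            # each source = one alternative plan
        try:
            record = fetch_bibrecord(source, key, outages)
        except ConnectionError:
            continue                  # action failed: replan with next source
        cache[key] = record
        return record
    return None                       # all ways of satisfying the goal failed

cache = {}
rec = plan_and_execute("kinny94", ["local.bib", "ResearchIndex", "ACM-DL"],
                       outages={"local.bib"}, cache=cache)
```

Here the simulated outage of the first source causes the loop to fall through to the next alternative, mirroring the dynamic recovery described above.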
Planning has proven advantages in the task of information integration from multiple distributed sources; it hides from the user the process of data acquisition and manipulation [1, 10]. We take this idea further and weave such information integration into an ongoing human-computer collaboration on a broader task that is the source of the information need. This setup creates advantages for both parties and thus results in more efficient overall execution of the task. The user's simultaneous involvement in editing the paper and expertise in the particular academic field provides the computer assistant with highly selective query terms and thus results in a high likelihood of **Writer's Aid** autonomously finding the necessary information. The system's performance of various search and formatting actions saves the writer the time and effort of identifying and creating bibliographic records and locating viewable versions of cited papers, enabling more efficient paper writing. Besides being a natural framework for reasoning about goals and actions, planning offers advantages from the design and implementation standpoints. The declarative nature of planning-based interfaces allows extending the system by adding new types of user goals, new information sources, and new information retrieval actions independently of the existing code. As reported by Barish *et al.* [3] and confirmed by our own experience with **Writer's Aid**, once the planning structure was in place, designing, extending, and modifying the system in response to users' requests required relatively little effort. This flexibility ensures that, with more and more specialized searchable collections appearing on the Internet, **Writer's Aid**'s repertoire of available search methods and sources can be easily augmented. Initial laboratory user studies have shown that **Writer's Aid** meets its design goals.
In particular, most subjects (like many authors who are fluent in web technologies) ordinarily perform a sequence of online searches for bibliographic information and papers similar to those done by **Writer's Aid**. Even for such users, **Writer's Aid** was of significant help, freeing them from doing these tasks and providing relevant information in a timely manner during the writing process. An overwhelming majority of users found the system useful (some characterizing it as *very useful*), reflecting how often it was able to find papers the user intended to cite. Users found the interface intuitive and easy to learn. These results are all the more impressive because little attention was paid to fine-tuning the surface features of **Writer's Aid**; for example, the tested version of **Writer's Aid** did not use any advanced context-based rank-ordering of the search results. A further example of **Writer's Aid**'s usefulness is the preparation of this paper: some of the references cited were identified using **Writer's Aid**, and some of the bibliographic records and all inline citations were done by the system. **Writer's Aid** is implemented on top of Carsten Dominik's RefTeX package for the GNU Emacs editor, and the LaTeX and BibTeX document typesetting systems. The front end is implemented in Emacs Lisp, the planner in Allegro Common Lisp, and web access in WebL [9]. **Writer's Aid** is activated when the user opens a TeX document in the Emacs text editor. After giving an example to illustrate the use and advantages of **Writer's Aid**, the paper enumerates characteristics of the bibliographic domain and task that underlie the design choices in **Writer's Aid** and then presents details of the system.
The system description includes a discussion of the major issues that arise in building collaborative interfaces that utilize planning in domains with incomplete information, especially the implications for the system architecture and knowledge representation and planning methods. We briefly outline extensions to classical planning methods to meet the demands of collaborative interfaces in domains with properties like those of **Writer's Aid**'s. The paper then presents results of initial user studies, describes related work, and concludes with a discussion of possible future extensions to the system.

### 2. Overview and Example

To illustrate **Writer's Aid**'s functions and main features, we will explore its use in the following scenario: An author, Ed, is writing a paper on collaborative interfaces. He decides to refer to Kinny et al.'s article on teamwork, but he does not recall the title of the paper nor where it appeared. He does not want to interrupt his writing to locate the paper, but he does want to scan the paper once it is found to make sure his claims about it are accurate.

**Entering a citation command:** Ed inserts a citation command with a special Emacs command. The system then prompts him to enter search parameters: keywords of the search and an indication of whether he wants only the bibliographic data on papers or the viewable versions as well. Ed enters *Kinny* and *team* as search keywords and selects the option of obtaining bibliographic records and viewable versions of relevant papers. After a citation command is issued, a label resembling an ordinary TeX citation command is automatically generated and placed in the body of the text. The label displays the type, keywords and status of the citation command, as shown in Figure 1.
The labels include the search keywords and type of search, a word indicating the status (**SEARCHING** or **DONE**) and the number of bibliographic records and viewable papers found in reference to the particular citation command; they may be updated to reflect the most recent findings by a simple user request. While Ed continues writing (and inserting other citation commands), Writer's Aid plans and executes a search for the material he has requested. To make the search more efficient and better suited to Ed's needs, Writer's Aid limits the search for bibliographic information and papers to his preferred bibliographies and paper collections. Writer's Aid identifies preferred bibliographies semi-automatically at the time of installation by searching a user's home directory for his own bibtex files and inspecting his browser's bookmarks. At installation time, Writer's Aid identified as Ed's preferred bibliographies his own bibtex files and two on-line scientific collections: ResearchIndex and the ACM Digital Library. It constructs a plan to query Ed's preferred bibliographic collections for the list of bibliographic records of papers that are related to the keywords Kinny and team. Once Writer's Aid has collected the list of relevant paper titles from Ed's bibtex file, ResearchIndex and the ACM Digital Library, it attempts to locate a viewable version of each identified paper. Writer's Aid's arsenal includes actions for parsing bibtex files; querying various digital repositories (currently NEC Research Institute's ResearchIndex and the ACM Digital Library) in search of papers, paper titles and authors' homepages; parsing homepages in search of papers with a given title; and downloading papers from a given URL.

**Reviewing the results and selecting a citation item:** To view the data that Writer's Aid has collected in response to the citation command, Ed puts the cursor at the body of the citation command and issues a command to display the search results.
The list of paper titles that has been compiled is displayed in a separate window, while the following options are a single keypress away: viewing and editing the bibtex record for an item; viewing the text of the paper, if it is available; and selecting an item for citation. The prompt at the bottom of the selection buffer displays a help line with the commands for each option (see Figure 1). Ed reviews the list, scanning some of the papers by issuing a view command until he identifies the paper he wants to cite, namely “Planned Team Activity”. He selects this paper with a single keystroke, and Writer's Aid ensures the citation is ready for compilation, that is, the appropriate bibliographic record is inserted in the bibliography file and the key for that record is placed in the text of the paper.

3. The Citation Application Domain

The Writer's Aid application has several characteristics that influenced the design of the system architecture and its constituent knowledge representation, reasoning, and planning systems. These requirements arise from two sets of characteristics: characteristics of the interface, that is, capabilities desired in the interaction with a person, and characteristics of the domain, that is, the properties of references and citations. These characteristics also appear in many other applications for which collaborative interface systems would be beneficial, and hence their effect on system design is relevant beyond this particular application. We briefly describe these characteristics and their implications for the design and implementation of the collaborative interface system.

3.1 Interface Characteristics

We discuss three interface requirements in this section, along with their implications for the implemented system. These requirements were considered in the initial design of the collaborative interface and later refined given the observations and interviews from our pilot user studies.
Anytime editing/search/access capability: A key requirement of the interface is the seamless integration of the search and selection of papers for citation with the process of writing. A user can insert new citation commands and access possibly incomplete results of the search for any of the citation commands at any time while writing or editing a paper. To guarantee the user fast and effective access to bibliographic information for all citations, information requests arising from citation commands are processed in a round-robin fashion, working on tasks in order of increasing complexity. For instance, querying a bibliography for relevant bibliographic records is easier and faster than searching for the viewable version of a paper. As a result, Writer's Aid first attempts to locate the bibliographic records for all citations, and postpones attempting to satisfy goals related to obtaining their viewable versions.

Availability of partial results and search status: A user can access the results of a search and make a selection at any time, even when the search has not yet completed. When using Writer's Aid, a person's primary task, and hence focus, is typically on writing the paper. As a result, users usually do not explicitly monitor the progress of the system. However, Writer's Aid informs the user of the progress of the search by updating the body of the citation command appearing in the text of the paper (see Figure 1). The display of search-status information is helpful in two ways: it enables early detection of queries that produce no matches (allowing reformulation of the citation command), and it informs users about the completion status of a citation before they start reviewing and selecting from the list of papers.

3.2 Domain Characteristics

The domain of Writer's Aid has two characteristics that directly affect the types of technology used in the underlying system, both relating to the incompleteness of the information possessed by the system.
A major challenge to systems design is the inherent incompleteness of information about Writer's Aid's domain: bibliographic records, papers, their locations, keywords. A complete description of this domain cannot be provided a priori and can never be fully acquired. Rather, the system must be able to represent partial information and to reason about acquiring missing information that is necessary to satisfy the planning goals related to a user's citation needs. Further, Writer's Aid's domain knowledge has local incompleteness; it is incomplete even with respect to properties of the objects the system knows about. For instance, it may not know which papers have a particular keyword in their abstracts or where viewable versions of a paper are located. As a result, actions in the bibliographic domain rely heavily on information gathering, which in turn affects the actions to be taken subsequently. For example, the results of a query for relevant papers may determine which viewable versions of papers the system acquires. The system must therefore be able to interleave information acquisition and planning; this is a special case of interleaved planning and plan execution.

(A user can override the default ordering described in Section 3.1 and focus Writer's Aid specifically on getting a particular paper by using a special immediate citation command. The search for materials related to an immediate citation is not abandoned until all possibilities are attempted, that is, until all related planning goals are either satisfied or found unsatisfiable.)

Figure 1: A snapshot of Writer's Aid. In the middle Emacs window, the user has entered a set of citations in the text of a paper. The body of the citation command displays the status of the searches, the first of which is completed. The user is browsing the paper list from one of the incomplete searches in the front window. The rear window is showing the first paper from the list, retrieved by a single keystroke.
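The interleaving of information acquisition and planning can be illustrated with a small sketch: a sensing action (a keyword query) runs first, and its result determines which acting steps (downloads) get planned. The index contents and all function names here are invented for illustration, not taken from Writer's Aid.

```python
# Toy sketch of interleaved sensing and planning: the plan for download
# actions cannot be built until the sensing action has been executed,
# because its result determines the plan's contents.

def query_relevant_papers(keywords):
    """Sensing-action stand-in: return titles matching the keywords."""
    index = {"team": ["Planned Team Activity", "Teamwork"]}  # fake index
    return index.get(keywords, [])

def plan_downloads(keywords):
    """Sense first, then plan one download action per discovered paper."""
    titles = query_relevant_papers(keywords)   # execute the sensing action
    return [("download", t) for t in titles]   # plan depends on its result

steps = plan_downloads("team")
```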
Classical planning techniques are insufficient to handle these properties of the domain. To address inherent incompleteness, Writer's Aid uses an expressive yet tractable logic, PSIPLAN [2], which allows efficient representation of incomplete information. To address local incompleteness and allow for information gathering, Writer's Aid deploys a novel method for combining planning with execution of incomplete plans, which we call planning with hypotheticals. These important technical aspects of our solution are described in a later section. The domain characteristics interact with the interface characteristics. For instance, since Writer's Aid begins with little knowledge about papers relevant to the user's request, a substantial amount of information gathering may be required to satisfy a user's requests. Because most of the information is obtained from remote sources over the Internet, it may take considerable time to identify, locate and download all of this information. On the other hand, it is very likely that the user will be satisfied with only partial results of the search, as conventional search engines often provide only partial results. To make partial results quickly available to the user (an important interface characteristic), Writer's Aid's design includes (i) formulation of the information request into a set of goals, processed in order of increasing likelihood of relevancy to the user, (ii) initial goal reduction to account for already available information, and (iii) round-robin processing of information requests in order of increasing search complexity. These features are described in more detail in the next sections of the paper.

4. Architecture Overview

The architecture of Writer's Aid contains the following three major components in addition to a front-end Emacs interface:

- **State of Knowledge (SOK) and Goal (G) databases**: The SOK database contains Writer's Aid's knowledge about the user's preferences and the world of bibliographies, papers and paper sources. The G database records the system's goals.
- **The Reasoning module (R)**: This module handles goal reduction with respect to the SOK database.
- **The Planning Problem Manager (PPM)**: This module creates planning problems from adopted goals and invokes the planner, PSIPOP-SE, to solve and execute them.

In brief, Writer's Aid uses these components to handle a user's citation command as follows: The command itself results in a goal being posted to the goal database $G$ and the goal reduction module $R$ being invoked as a separate thread. $R$ consults the SOK database and computes the part of the goal that is already accomplished and the part that still remains to be achieved. It places the latter onto $G$, passing it to the planning problem manager, PPM. The PPM module creates an instance of a planning problem and hands it to the planner, PSIPOP-SE, which either constructs and executes a plan or reports failure if the planning problem is unsolvable. Upon executing the plan actions, Writer's Aid updates the SOK database to reflect all changes in knowledge. For example, additional knowledge generated by an information-gathering action is added. Upon completion of its part, PPM removes the goals that were satisfied from the goal agenda, records the failure for the (sub)goals that it failed to achieve, and proceeds with the next goal. When a user issues a command to view a list of records and papers corresponding to a citation command, this information is derived from the SOK, formatted, and presented in a separate window for browsing.

4.1 SOK and Goal Formulation

All of Writer's Aid's knowledge about the world is contained in the SOK database. As discussed above, this knowledge is assumed to be correct but incomplete.
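The control flow through these components can be sketched minimally, under the simplifying assumption that goals and knowledge are plain sets of ground-atom strings rather than PSIPLAN formulas; the class and method names below are ours, not the system's.

```python
# Schematic sketch of the SOK/G/R/PPM pipeline: a citation command posts a
# goal, R reduces it against the SOK, and the PPM stand-in "executes" the
# remaining subgoals, updating the SOK with the knowledge gained.

class WritersAidSketch:
    def __init__(self):
        self.sok = set()     # State of Knowledge database (ground atoms)
        self.goals = []      # Goal database G (reduced goals, in order)

    def post_citation_goal(self, goal_atoms):
        # R: keep only the part of the goal not already entailed by the SOK
        remaining = [g for g in goal_atoms if g not in self.sok]
        self.goals.append(remaining)
        self.run_ppm(remaining)

    def run_ppm(self, remaining):
        # PPM/planner stand-in: achieving a subgoal adds it to the SOK
        for g in remaining:
            self.sok.add(g)

aid = WritersAidSketch()
aid.sok.add("GotBib(kinny94)")    # the record is already known locally
aid.post_citation_goal(["GotBib(kinny94)", "Got(kinny94)"])
```

Note how the already-satisfied `GotBib(kinny94)` subgoal is stripped off before planning, mirroring the reduction step performed by R.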
Since the system cannot have access to a complete description of the world, it must be able to effectively represent, reason, and plan with incomplete knowledge. Writer's Aid uses the PSIPLAN language [2], which enables efficient representation of an agent's incomplete knowledge about the world and knowledge goals and has an efficient associated knowledge update procedure. As described in the language specification [2], PSIPLAN entailment is sound, complete, and takes only polynomial time in the size of the agent's SOK database. Alternative planning representations are either intractable in the general case, or, as with the tractable LCW (locally closed world) representation [6], lack completeness and sometimes discard correct information. Precision in reasoning about the world in the presence of the unknown bears directly on avoiding redundant information gathering; it is thus especially critical for a system that uses costly (time-consuming) information-gathering actions. Incompleteness of reasoning may cause failure to construct all possible plans, which is also problematic for a collaborative agent. PSIPLAN formulas are either ground atoms over function-free terms, universally quantified negated clauses with exceptions, or knowledge propositions. For example, the statement The only bibliographies preferred by Ed are the digital library of the ACM, and maybe the ResearchIndex database. is represented in PSIPLAN by the following two propositions: 1. ACM's digital library is a preferred bibliography, which is represented by a ground atom: $$\text{PrefBib(ACM)}$$ 2.
Nothing is a preferred bibliography except for ACM and the ResearchIndex, which is expressed as the following quantified negated clause with exceptions: $$\forall b\, \neg \text{PrefBib}(b) \lor b = \text{ACM} \lor b = \text{RI}$$ To represent that the value of a certain proposition is known, PSIPLAN uses knowledge propositions; $\text{KW}(\text{PrefBib(ACM)})$ denotes that the agent knows the truth value of $\text{PrefBib(ACM)}$, that is, the agent knows whether ACM is a preferred bibliography. To represent the user's goals, Writer's Aid extends PSIPLAN to handle implication goals of the form $\forall \bar{x}\, \exists \bar{y}\; P(\bar{x}, \bar{y}) \implies Q(\bar{x}, \bar{y})$, where $\bar{x}$ and $\bar{y}$ are sets of variables, and both $P$ and $Q$ are conjunctions of atoms. A user's request to obtain papers relevant to subject $Y$ is formulated as the following goal: For each paper that is relevant to subject $Y$ according to some bibliography preferred by Ed, get that paper and get the bibliographic record for it. This goal is instantiated as three separate PSIPLAN goal formulas. The first goal is to obtain all papers, and the bibliographic records of papers, containing keywords $Y$ in the title and referenced in the user's own local bibliographic collections: $$\forall p\, \exists b\; \text{PrefBib}(b) \land \text{LocalBib}(b) \land \text{InCollection}(p, b) \land \text{TitleUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p)$$ The second goal extends the first to all of the user's preferred bibliographic collections: $$\forall p\, \exists b\; \text{PrefBib}(b) \land \text{InCollection}(p, b) \land \text{TitleUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p)$$ The last goal is to obtain all papers containing keywords $Y$ in the text, rather than only in the title.
$$\forall p\, \exists b\; \text{PrefBib}(b) \land \text{InCollection}(p, b) \land \text{TextUses}(p, Y) \implies \text{Got}(p) \land \text{GotBib}(p)$$ The first goal is entailed by the second, which is entailed by the third; thus, the set of papers required by the first goal is subsumed by the set of the second goal's papers, which, in turn, is subsumed by that of the third goal (since a title is a part of the text). However, these three goals are posted and processed in the order presented above to explicitly prioritize the search for papers that are more likely to be in the desired set. Writer's Aid is able to accomplish this incremental processing without doing redundant searches for the same information by saving in the SOK the information acquired during its attempts to satisfy the first and second goals.

In this section, we use the following predicates: PrefBib(b) denotes that $b$ is a preferred bibliography; LocalBib(b) denotes that $b$ is a locally stored bibtex bibliography; InCollection(p, b) denotes that paper $p$ is in the collection of bibliography $b$; TitleUses(p, Y) denotes that keywords $Y$ occur in $p$'s title (where by title we mean a combination of the title and author names); TextUses(p, Y) denotes that keywords $Y$ occur in $p$'s full text, including the title and author fields; Got(p) and GotBib(p) denote, respectively, that paper $p$ and its bibliographic record are stored locally.

4.2 Goal Reduction

Once a goal is posted to the goal database $G$, the goal reduction module $R$ handles the processing of the goal. $R$ chooses a goal from $G$, reduces it with respect to the SOK, and passes it to PPM. When the planner returns, $R$ records success or failure in achieving the goal, and proceeds to the next one.
For simplicity of presentation, we abbreviate the conjunction of predicates occurring in the left-hand side of goals (1-3) above by a metapredicate $Rel(p, b, Y)$, indicating that a paper $p$ is relevant to keywords $Y$ according to bibliography $b$, and drop $GotBib(p)$ from the right-hand side. Thus, the goal with which we are concerned is $$g = \forall p\, \exists b \ PrefBib(b) \land Rel(p, b, Y) \implies Got(p) \quad (4)$$ To satisfy this goal, it is first necessary to find all papers that are relevant to $Y$ according to some preferred bibliography and then, for those papers only, construct a plan for obtaining them. Thus, $R$ transforms $g$ into two goals in PSIPLAN's base language: 1. finding out the truth value of the conjunction $PrefBib(b) \land Rel(p, b, Y)$ for all possible values of $b$ and $p$, i.e., $$g_1 = \forall p \forall b \ KW(PrefBib(b) \land Rel(p, b, Y)),$$ and, after $g_1$ is achieved, 2. instances of $Got(p)$ corresponding to all values of $p$ for which $PrefBib(b) \land Rel(p, b, Y)$ is true. $R$ places $g_1$ as the next goal on $G$ and further reduces it with respect to the SOK to identify the part that is not already known (e.g., as a result of previously executed information-gathering actions). This computation corresponds to a special PSIPLAN operation called extended difference: given PSIPLAN propositions $A$ and $B$, the extended difference of $A$ and $B$ is the set of propositions of $A$ that are not entailed by $B$. $R$ reduces any goal $g$ by computing the extended difference of $g$ and the SOK. For example, given an information goal $g_1$ and an SOK that contains the information that nothing is a preferred bibliography except for possibly the ACM digital library and the ResearchIndex, $R$ deduces that the only remaining information goals are $$g_2 = \forall p \ KW(PrefBib(ACM) \land Rel(p, ACM, Y)),$$ $$g_3 = \forall p \ KW(PrefBib(RI) \land Rel(p, RI, Y)),$$ and passes $g_2$ and $g_3$ to the PPM.
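The extended-difference reduction can be sketched under a strong simplification: here entailment is reduced (by assumption) to set membership over ground knowledge atoms, which is far weaker than PSIPLAN's actual entailment but shows the shape of the operation.

```python
# Toy sketch of goal reduction via "extended difference": keep only the
# subgoals of a goal that are not already entailed by the SOK. Entailment
# is simplified to membership over ground atoms (an assumption).

def extended_difference(goal_atoms, sok):
    """Return the subgoals of `goal_atoms` not entailed by `sok`."""
    return [g for g in goal_atoms if g not in sok]

sok = {"KW(PrefBib(local.bib))"}                 # already sensed earlier
g1 = ["KW(PrefBib(local.bib))", "KW(PrefBib(ACM))", "KW(PrefBib(RI))"]
remaining = extended_difference(g1, sok)         # the ACM and RI subgoals survive
```

The two surviving subgoals play the role of $g_2$ and $g_3$ above: exactly the part of $g_1$ the planner still has to work on.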
Such reduction of $g$, if not done prior to planning, would otherwise need to be carried out inside the planner itself while planning to achieve the goal. However, in our formalism no information ever gets lost, so such early separation of yet-unknown facts from those already known is an advantage: it identifies exactly what goal the planner is working to achieve, and the user can access that information while the planner is working on the goal. The advantage becomes even more apparent if we consider having multiple agents working to achieve the goal. In such cases, reducing the goal initially prevents redundant computation.

4.3 Managing Planning Problems

Once the reduced goal is computed, it is passed to PPM, the Planning Problem Manager, which takes care of creating, prioritizing, solving, and keeping track of the status of multiple planning tasks arising from goals adopted by Writer's Aid. PPM consists of two major components: a list of planning problems, and a planning algorithm, PSIPOP-SE, which constructs solution plans for individual planning problems. When a goal is passed to PPM, a new planning problem is created and passed to PSIPOP-SE, which searches for a solution plan and returns the result. Each planning problem is a structure that records a planning goal, its solution, and the overall status of the planning problem, which is one of open, done, or unsatisfiable. Open problems are those for which a solution plan has not yet been found but the goal has not been shown to be unsatisfiable. If a solution plan is found and successfully executed, PPM removes the planning problem from the list of open problems and places it on the done list. If a solution is found but an action execution failure occurs, the failed action instance is recorded and never used again by the planner; the planning problem remains on the open list until the planner establishes that no alternative course of action exists.
Unsatisfiable problems are those that have unachievable goals.

Iterative Deepening in Hypotheticals. To guarantee step-by-step processing and availability of partial results of the search for all of the user’s requests, as motivated earlier, PPM processes open problems in a round-robin fashion, gradually increasing the maximum complexity level of finding and executing the solution plan. To implement this gradual increase of solution complexity, PPM performs iterative deepening in hypotheticals. A hypothetical is a partial plan that hypothesizes on the value of an unknown proposition or subgoal. For example, having no information on the location of a paper, the planner may adopt a hypothesis that the paper is available from a certain collection, and verify the information by querying the collection. An example of a plan with two hypotheses is one that hypothesizes that a paper is available from the author’s homepage and then, having no information about the author’s homepage, hypothesizes that the URL for the homepage can be found from a known index. By verifying a hypothesis via execution of a sensing action, the planner eventually collects enough information, and thus reduces the incompleteness of the knowledge enough, to find a solution plan or to find the goal unsatisfiable. PPM maintains a list of all open problems, processed in a loop. At each cycle of the loop PPM attempts to find a solution for each open problem in turn, increasing the maximum allowed number of hypotheses in a solution plan when necessary, and executes the plan, until the processing is completed and the problem is removed from the open list.
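The control loop just described can be sketched as follows. The `solve` callback, the dictionary-based problem records, and the bound limit are assumptions for illustration; `solve(problem, bound)` stands in for a search restricted to at most `bound` hypotheses, returning a plan, `None` (nothing found within the bound), or `"unsat"`.

```python
def ppm_loop(problems, solve, max_hypotheses=5):
    """Round-robin over open problems with iterative deepening in
    hypotheticals: every open problem gets a turn at each bound,
    and the bound on hypotheses rises only when needed."""
    bound = 0
    open_probs = list(problems)
    while open_probs and bound <= max_hypotheses:
        still_open = []
        for prob in open_probs:
            result = solve(prob, bound)
            if result == "unsat":
                prob["status"] = "unsatisfiable"
            elif result is not None:
                prob["status"], prob["plan"] = "done", result
            else:
                still_open.append(prob)  # retry at a higher bound
        open_probs = still_open
        bound += 1
    return problems

# A stand-in solver: a problem is solvable once the bound reaches
# the number of hypotheses it needs.
problems = [{"goal": "g1", "needs": 2, "status": "open"},
            {"goal": "g2", "needs": 99, "status": "open"}]
solve = lambda prob, bound: ["plan"] if bound >= prob["needs"] else None
ppm_loop(problems, solve)
```

Because every open problem is attempted at each bound before the bound rises, an easy request is never starved by a hard one.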
This combination of iterative deepening in hypotheticals with round-robin processing of planning problems enables effective time sharing between the user’s goals, which is necessary for providing partial results on many user requests simultaneously, and for avoiding the bottleneck of searching for a hard-to-find paper, which may not be the one desired by the user.

5. Evaluation

We performed a pilot study with two users, followed by a user study involving eleven subjects. Most of the subjects were Harvard University students and postdocs; eleven are computer scientists, one a physicist. Most, though not all, of the subjects were familiar with Emacs and had previously written papers using \LaTeX{} and \textsc{BibTeX}. The subjects were shown a brief, two-minute demonstration of the system; they were then given a printed tutorial\footnote{The tutorial is available at \url{http://www.eecs.harvard.edu/~tbabaian/waid/tutor.ps}.} and asked to follow the steps of the tutorial. The subjects were next asked to write a paragraph or two of text in the area of their expertise involving citations, using Writer’s Aid. All the subjects used the same local bibliography collection, which overlapped with only some of the citations the subjects wished to make. To our surprise, even without access to the writer’s personal \textsc{BibTeX} database, but using only ResearchIndex as another preferred bibliography and the (dynamically located) authors’ homepages in the search for papers, Writer’s Aid was able in most cases to successfully locate at least bibliographic records for the papers. The success rate for finding viewable versions was more modest, but users still found the system very helpful. We expect a higher number of papers could be found by expanding the set of sources to include more online collections. After the test, subjects completed a questionnaire allowing freeform answers to the following questions: 1.
How hard was it to learn to use Writer’s Aid? 2. Was it useful? Would you use it for writing papers? 3. Which modifications to the functionality/interface of Writer’s Aid would you recommend? Some users were later interviewed to clarify their responses to Question 3. The success of Writer’s Aid is indicated by the answers to Question 2. To the first part, “Was Writer’s Aid useful?”, the replies were: very useful (3), useful (7), moderately useful (1). To the question “Would you use it for writing papers?” ten users answered yes. (The single dissenting user explained that he would not trust any online source with a bibliographic record, so he would manually verify all such records anyway, making Writer’s Aid redundant in his mind.) To the question “How hard was it to learn to use Writer’s Aid?” four users answered very easy, two easy, and five reasonably easy or not hard. In response to Question 3, users suggested adding morphology-aware search, automatic spell checking of keywords, the ability to add a record to the personal bibliographic collection without citing it, and minor alterations to the window interface. We are planning to implement some of these features in the next version of Writer’s Aid.

6. Related Work and Future Directions

Research presented in this paper has connections to work in several areas, most notably AI-based collaborative interfaces, information integration systems, and Internet search. Like many other information integration systems, Writer’s Aid takes advantage of the breadth of bibliographic information available on the web. BIG [10] integrates several AI technologies, including resource-bounded planning and scheduling, to conduct an offline search for information on software packages based on a client’s specification. Barish \textit{et al.} [3] report on a query-planning-based system, called TheaterLoc, that searches online movie-related databases in real time in response to users’ queries.
Writer’s Aid differs from these and other planning-based information-retrieval systems [11] in carrying out its activities in the context of collaboration with a user in the ongoing writing process, so that this writing process provides context for interpreting the information request. Writer’s Aid is also distinguished from other planning-based information-retrieval systems by the capabilities it incorporates for interleaved planning and execution, crucial for integrating information-gathering into the planning process. Collagen [15] is a middleware package based on a theory of collaboration in dialogue [12]; it provides a means for creating interfaces that participate in dialogues with users about their goals and beliefs, suggesting possible courses of action based on the available library of act recipes. Collagen does not include capabilities for automated reasoning about goal achievement beyond the use of a fixed set of recipes. Thus, it lacks Writer’s Aid’s ability to satisfy user goals from almost any initial state using a variety of dynamically created courses of action. Collagen’s collaborative strength is its ability to work with the user through a process, known (via a recipe library) to the system, leading to achievement of the user’s goal. The focus in Writer’s Aid is on another system capability important for collaboration, namely, the ability to plan for and carry out autonomously a complex task that otherwise would have to be done by the human, and to integrate the activities of the system-partner with those of the user in a non-intrusive and efficient manner. Other work has explored the use of context in information retrieval. Watson [4] is intended to work with its user proactively, downloading and suggesting information it regards as relevant to a document that the user is currently editing or viewing. Watson creates a search query based on the text and the structure of the document, but not related to any specific user request.
However, the user study of Watson [4] evaluated the utility of information provided by Watson statically; it did not involve the system working “alongside” a user. As a result, the appropriateness of Watson’s search results in interactive use was not evaluated in that study. In contrast, Writer’s Aid takes seriously the fact that when users delegate to a system the task of finding information needed to complete a task (or satisfy a user’s goal), the usefulness of the system depends critically on the relevance of the information retrieved and on the results being available in a timely manner. Otherwise, the time it takes the user to sift through irrelevant information, or the time spent waiting for the results, may outweigh the time the user saves by not performing the search himself. These performance characteristics are ensured in Writer’s Aid by the system’s adoption of the user’s precisely specified search goal and its use of information sources that are directly related to a well-defined set of data items, such as papers and bibliographic records. In the future, we plan to extend Writer’s Aid to incorporate the context of a citation request for more efficient search and ranking of the results. Another direction we have started to explore is adding the user as a source of information about his or her own preferences and knowledge of the relevance of various online collections to the subject of a paper. Such personalization tasks can be stated declaratively via a set of knowledge goals and satisfied by an action of querying the writer when this information becomes necessary. This representation separates personalization of the interface from its overall architecture, making it more easily adjustable. It also leads to preference elicitation that occurs within the context of a particular task. 7.
Conclusion

We have presented a writer’s assistant system that works collaboratively with a user, achieving the necessary flexibility of behavior through explicit representation, reasoning, and planning with respect to goals and domain knowledge. Collaborativeness is embodied in the system’s commitment to shared goals of producing accurate, well-formed citations, and in communication between the parties in both directions: the user providing query information and bibliographic choices to the system, and the system providing query status and gathered information to the user. The use of planning technology to implement collaborative interfaces places new requirements on the knowledge representation and planning methods. We presented a set of extensions to classical planning representations and techniques to satisfy these requirements. In particular, the use of an expressive yet precise and tractable formalism for knowledge representation, PSIPLAN, and the addition of hypothetical planning to integrate domain actions with sensing actions and interleaved execution were crucial to the implementation of the collaboration. We conducted a laboratory user study to examine the effectiveness of the system. The results indicate the success of this particular interface and its implementation. Users characterized it as a useful and easy-to-learn tool that they would like to have for academic writing.

8. Acknowledgements

The research reported in this paper was supported by National Science Foundation grants IRI-9618848 and IIS-9978343 to Harvard University. The authors thank Luke Hunsberger, Wheeler Ruml and Christian Lindig for their assistance in developing the system and for helpful comments on the paper, and all participants of the user study.

9. References
Assessing the quality of tabular state machines through metrics

Published: 01/01/2017. Document Version: Publisher’s PDF, also known as Version of Record (includes final page, issue and volume numbers).
Assessing the quality of tabular state machines through metrics

Ammar Osaiweran¹, Jelena Marincic¹, Jan Friso Groote²
¹ASML Netherlands B.V., Veldhoven, The Netherlands
²Eindhoven University of Technology, Eindhoven, The Netherlands

Computer Science Reports 17-01, Eindhoven, February 2017. ISSN 0926-4515. Editors: prof.dr. P.M.E. De Bra, prof.dr.ir. J.J. van Wijk. Reports are available at: http://library.tue.nl/catalog/TUEPublication.csp?Language=dut&Type=ComputerScienceReports&Sort=Author&level=1 and http://library.tue.nl/catalog/TUEPublication.csp?Language=dut&Type=ComputerScienceReports&Sort=Year&Level=1

Abstract

Software metrics are widely used to measure the quality of software and to give an early indication of the efficiency of the development process in industry. There are many well-established frameworks for measuring the quality of source code through metrics, but limited attention has been paid to the quality of software models. In this article, we evaluate the quality of state machine models specified using the Analytical Software Design (ASD) tooling. We discuss how we applied a number of metrics to ASD models in an industrial setting and report about results and lessons learned when collecting these metrics. Furthermore, we recommend some quality limits for each metric and validate them on models developed in a number of industrial projects.

1 Introduction

The use of model-based techniques in software development processes has been promoted for many years [24, 3, 4, 5, 1, 12, 6]. The aim is to use the models as a main software artifact in the development process, not only for visualization and communication among developers but also as an important means of specification, formal verification, code generation, testing and validation.
The premise is that, by modeling, engineers will focus more on the core software functionality than on the implementation details. As a crucial part of the modeling paradigm, the code is often automatically generated, implementing the specification of the source model. This automatic construction of source code gives real-world value to the behavior specified in the model. Usually, the transformation to code is hidden from modelers; it is just one more command to execute or a button to click before compilation. Furthermore, for some modeling frameworks, behavioral correctness of models is established by automatic formal verification, of which the related formal specification is also hidden from end users. Visible to users are only the verification results or counterexamples guiding users when certain properties are violated. The shift from traditional coding towards the model-based development paradigm is becoming very popular and attractive in industry. The reason is that it results in a notable increase of quality and reduction of time to market. Implementation details that support the execution of the core functionality are taken care of by the code generator, reducing the time and overhead of error-prone manual implementation, facilitating automatic verification, and increasing overall productivity [23]. In traditional development, source code is the main software artifact. To measure the quality of source code, a number of widely used metrics are utilized, with well-established, industrial-strength tools and frameworks such as TICS [27], CodeSonar [9], SourceMonitor [26] and VerifySoft [28]. Code metrics are useful means to detect decay in architectures and code smells [13] that hinder future evolution and maintenance. However, these frameworks and tools cannot be applied directly to measure the quality of models. They can measure the generated code, but it is debatable whether this is meaningful.
This is because code generators usually generate correct and optimal source code tailored to a specific domain, and the generated code is often excluded from code analysis tools due to violations of, and non-adherence to, the prescribed coding standards. Therefore, complexity, duplication and other undesired properties must be analyzed at the level of models. Since industry is becoming more and more reliant on software models, there is an urgent need to establish a way of measuring various metrics at the level of models and not at the level of source code. In our industrial context, we use state machines to design and specify reactive and control aspects of software. The behavior of these machines is described using a lightweight formal modeling tool called ASD:Suite. The tool allows specification of state machines in a tabular format. These specifications can be formally verified, and corresponding source code can be generated from these verified models [29]. Using ASD:Suite we can create models, but how can we ensure their quality? Because there are no means to measure the quality of these models, a number of challenging questions arise. How can we evaluate the quality of this type of state machine models? Will we find complex and big models in the software archive? Which factors contribute to the complexity of models? How can these factors be detected and measured? How can we help engineers to improve the quality of their future models? How can we provide to modelers information on deterioration as their models evolve? In this paper we provide answers to the above questions by utilizing a number of software metrics that we tailored and adapted for measuring the quality of ASD models. We discuss a number of observations raised when analyzing metrics of various models. Based on our empirical results, we propose a number of practical thresholds for various metrics.
Note that our work is applied to models developed in real industrial projects and real products that are shipped to the market, and not to simple case studies or prototypes. This article is structured as follows. Section 2 discusses related work on model-based development and metrics of state machines. Section 3 introduces ASD to the extent needed for this article. In Section 4 a number of well-known software metrics are detailed with their application to ASD models. Section 5 introduces recommended limits of metrics for good-quality models. Section 6 details the data collection process of metrics from models and discusses observations during the data analysis. In Section 7 we conclude our paper, highlighting the limitations and future work in this regard.

2 Related work

In previous research at Philips Healthcare [25], guidelines for readability and verifiability of ASD models were introduced. An important guideline is, for instance: an ASD tabular model should not include more than 250 rows, leading to not more than 3000 lines of generated code. The limitation of this guideline is that it considers only the size of models and generated code, while no other complexity factors were addressed. Furthermore, there was no automatic means to calculate the metrics at the level of models. More recently, a number of metrics such as cyclomatic complexity (CC) [21], Halstead complexity [17] and Zage complexity [31, 30] were applied to SCADE models. The purpose is to establish whether metrics for traditional source code can be used to assess the complexity of SCADE models and to detect unavoidable complexity. To estimate the reliability of UML state machines, and to identify failure-prone components, a group of authors [20] measured the cyclomatic complexity of UML state machines. They did not measure the CC directly on state machines, but on the control flow graph generated from their software realization. Similarly, other authors focus on assessing the number of tests.
For example, in [15] decision diagrams were used as intermediate artefacts to calculate the number of tests for the code of concurrent state machines. For automatically generated state machines that contain a large number of states, and that have their abstraction levels flattened, the work of [16] proposes a complexity metric to assist in generating hierarchical state machines from a flat state machine. A technique for search-based clustering of related states is used to identify potential super-states, and then the CC of each cluster is evaluated for a proper choice of super-states.

3 Analytical Software Design

This section provides a short introduction to the ASD approach and its toolset, the ASD:Suite [7, 19]. ASD is an approach used for building formally verified, component-based systems through the application of formal methods in industrial practice. ASD combines the Box Structure Development Method [22] and the Communicating Sequential Processes (CSP) formalism [18], and uses the Failures-Divergence Refinement tool FDR2 [11] as a model checker for formal verification. Using the ASD:Suite, models of components and interfaces can be described. Two types of models are distinguished, which are both state machines specified by a tabular notation: ASD interface models and ASD design models. These models are specified following the Sequence-Based Specification technique, to force consistent and complete specifications [8]. The external behavior (or contract) of a component is specified using an interface model, which excludes any internal behavior not seen by client components that use the interface. The interface model is implemented by a design model, which typically uses the interfaces of other so-called server components. In ASD we distinguish between two types of components: ASD components and foreign components. An ASD component includes an implemented interface model, a design model, and optional server interface models.
A foreign component has only an interface model, of which the implementation is constructed manually. Formal verification is established by verifying that calls in design models to interfaces of server components are correct with respect to the contracts of the servers. The model checker tool exhaustively searches for illegal interactions, deadlocks or livelocks in the specification. It is also formally checked whether the behaviour of the design model obeys its implemented interface model. Verification starts automatically with the click of a button. In case an error is detected in the models, the modeler receives a counterexample visualized in a sequence diagram, nicely traceable to the original specification of the model.

Figure 1: Example controller system of an automatic door.

Besides formal verification, the ASD:Suite allows code generation from design and interface models to a number of languages (C, C++, C#, Java). In ASD, communication between client and server components is asymmetric, using synchronous calls and asynchronous callbacks. A client issues synchronous calls to server components, whereas a server sends callbacks to its clients. Callbacks are stored in a First-In-First-Out (FIFO) callback queue. These callbacks are non-blocking and can be received by a component at any time. Note that in ASD:Suite a designer can configure an ASD component to be multi-threaded or single-threaded. Using the multi-threaded option, any ASD queue will run in its own thread, causing potential thread-switching and interleaving of actions. In our industrial context we always use the single-threaded option, which means that actions are executed until completion without any interleaving with other actions of the same or other ASD components. We detail the ASD specification by using a small automatic Door controller example. It consists of a Door controller component that controls a Sensor and a Motor component, see Figure 1.
The Controller receives two requests from external clients, namely systemOn to start up the system and systemOff to shut down the system. When the system is on, the controller may receive a callback from the sensor component when there is a detected object. Upon such an event, it issues a command to the motor component to open the door and apply a brake. Then it starts a timer, and when it times out, the controller issues a command to release the brake to close the door. This example is used to clarify and illustrate the interface model in Section 3.1 and the design model in Section 3.2.

3.1 ASD Interface Models

The interface model is the first artifact that must be specified when creating an ASD component. It describes the formal contract of the component by means of the allowed sequences of calls and callbacks exchanged with clients. Any internal behaviour not visible to clients is abstracted from the interface specification. Figure 2 depicts the tabular specification of an ASD interface model. The specification lists all implemented interfaces, their events (also called input stimuli), and guards or predicates on the events. A sequence of response actions can be specified in the Actions list, such as return values or callbacks to clients, and special actions such as Illegal, which essentially marks the corresponding event as not allowed in that state. In Figure 2 the interface specification of the Door controller is described. The model contains two states: Off and On. Any ASD model must be complete in the sense that actions for all input stimuli events must be defined. For example, in row 3 a systemOn event is accepted and the component will transit to the On state after returning a voidReply to IDoorControlAPI. In rows 4 and 7 of Figure 2 the Illegal action is specified, denoting that invoking the event is forbidden for clients. Once in the On state, the component accepts a systemOff request and transits back to the Off state.
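The Off/On table just described can be rendered as a small executable sketch. The dictionary layout and the ILLEGAL sentinel are our own encoding, not ASD:Suite's file format; the point is that every (state, event) pair has an entry, mirroring ASD's completeness requirement.

```python
ILLEGAL = object()  # sentinel marking an event as not allowed

# The Door interface model: states Off/On, events systemOn/systemOff.
# Each entry maps (state, event) to (response actions, next state).
DOOR_INTERFACE = {
    ("Off", "systemOn"):  (["voidReply"], "On"),
    ("Off", "systemOff"): ILLEGAL,
    ("On",  "systemOff"): (["voidReply"], "Off"),
    ("On",  "systemOn"):  ILLEGAL,
}

def step(table, state, event):
    entry = table[(state, event)]  # completeness: every pair is listed
    if entry is ILLEGAL:
        raise RuntimeError(f"illegal event {event} in state {state}")
    actions, next_state = entry
    return actions, next_state

actions, state = step(DOOR_INTERFACE, "Off", "systemOn")
```

Stepping with an Illegal entry raises an error, just as a client issuing a forbidden call would be rejected by the verified contract.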
Similarly, Figure 3 depicts the external behavior of the Sensor hardware component, which strictly alternates between the Active and Inactive states via the startSensing and stopSensing events. In row 10, a so-called internal event is specified, denoting that something internal in the device can happen, in this case a detectedMovement. As a consequence, the detectedObject callback is sent to the controller and the Sensor remains in the Active state. Via internal events, the interface abstracts from one or more actions that happen internally in the implementation. Conversely, an internal event is an abstract event that can or must occur, and which therefore acts as an obligation for any component that implements that interface.

3.2 ASD Design Models

The ASD design model implements the interface model and extends it with more detailed internal behavior. The design model is used to specify how the provided interface model is implemented by mapping it to all required (or used) interface models. This means that the design model may include calls to other interface models of other components. Figure 4 depicts the design model of the Door controller. The specification refines the interface model of Figure 2 with all required internal details and uses the interface models of other components, such as the Sensor interface model of Figure 3. For example, row 4 specifies that when the Door component receives a systemOn request, it does not only return voidReply to the client, as specified in the interface model, but it also calls a configuration component via the getConfiguration action and asks the Sensor hardware to start monitoring the surroundings via the startSensing action. After that, the controller transits to the DoorClose state. Note that the call to the configuration component is supplied with two data parameters, namely speed and time.
When the call returns, the component stores the values in its local storage parameters using the $\Rightarrow$ operator; they can be retrieved later, when needed, via the $\Leftarrow$ operator. Careful attention and thorough review of the data are needed, because checking the actual content of the data is excluded from formal verification in ASD. The rest of the specification is self-explanatory. An example of processing a callback that is stored in the ASD queue is depicted in rows 13 and 21, where the component may receive a detectedObject and a timeOut callback from the Sensor and the Timer components, respectively.

## 4 Tailoring code metrics for ASD models

To measure the quality of ASD models, we tailored a number of metrics that are widely used in industrial practice for measuring the quality of source code, such as the McCabe and Halstead complexity metrics [21, 17]. In this section we introduce these metrics and discuss how we adapt them to measure ASD design and interface models. We start by introducing the McCabe cyclomatic complexity metric (CC) and its application to measuring the complexity of ASD models. Then, we introduce our tailored version of the CC metric and its application to ASD models. We discuss how both metrics complement each other and how they provide more insight into the complexity of the models. After that we introduce the Halstead metrics, detailing how they are adapted to measure ASD models. Finally, we present metrics related to the formal verification performed by the model checker of ASD:Suite.

### 4.1 Cyclomatic complexity of ASD models

The cyclomatic complexity (CC) metric provides a quantitative measure of the number of linearly independent paths in a program's source code, represented by a directed control flow graph [21]. At the time the CC metric was developed, its main purpose was to calculate the minimum number of test cases required to test the independent paths of a program.
When the CC metric is high, it indicates not only that the number of related test cases is high but also that the program itself is hard for developers to read and understand. To calculate the CC of source code, the program should first be represented as a connected graph. For example, Figure 5 depicts a function foo and its graph representation. The CC of a program can be calculated using the following equation: $$ CC = E - N + 1, $$ where $E$ denotes the number of edges in the graph and $N$ is the total number of nodes. Clearly the CC of the code presented in Figure 5 is $5 - 5 + 1 = 1$. ![Figure 5: code and its graph representation](image) In a similar way, we can use \( CC \) for code as a basis to calculate the \( CC \) of ASD models. The tabular notation of ASD models can also be seen as a directed graph that contains edges and nodes. Note that for ASD components we are mainly concerned with the understandability aspect rather than with testing effort, since model checking replaces testing and guarantees that all paths of a model are exhaustively checked. Testing effort can be a concern for ASD foreign components, since their implementation is handcrafted. To illustrate how \( CC \) can be collected for ASD models, consider the specification depicted in Figure 6. The specification consists of two states, namely state \( X \) and state \( Y \). In state \( X \), the machine accepts events \( a_1 \), \( a_2 \) and \( a_3 \) via the IF interface and then moves to state \( Y \). The machine stays in state \( Y \) forever, accepting \( a_4 \) and \( a_5 \) events. ![Figure 6: An ASD interface model with 2 states and 5 transitions](image) **Application to the Door models** The \( CC \) of the Door interface model depicted in Figure 2 is 1, while the \( CC \) of the design model depicted in Figure 4 is 4. The \( CC \) of the Sensor interface model of Figure 3 is 2.
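The CC computation over a model's transition graph can be sketched in a few lines of Python; this is our own illustrative sketch (the edge-list encoding is not an ASD:Suite format), applied to the two-state machine of Figure 6.

```python
def cyclomatic_complexity(edges, nodes):
    """CC = E - N + 1, where each event is its own edge (Section 4.1)."""
    return len(edges) - len(nodes) + 1

# The machine of Figure 6: a1..a3 lead from X to Y; a4 and a5 stay in Y.
edges = [("X", "Y", "a1"), ("X", "Y", "a2"), ("X", "Y", "a3"),
         ("Y", "Y", "a4"), ("Y", "Y", "a5")]
nodes = {"X", "Y"}
print(cyclomatic_complexity(edges, nodes))  # 5 - 2 + 1 = 4
```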
### 4.2 Actual (structural) complexity

We tailored the \( CC \) metric to collect the so-called Actual (or structural) complexity (ACC) of a model. With the ACC metric we group edges between states. If there are multiple edges between certain states, we count them only as one. This means that in ACC any edge may contain one or more events (a set of events), while in CC each edge has only one event. For example, in Figure 7b, it is possible to transit from state \( X \) to state \( Y \) via either \( a_1, a_2 \) or \( a_3 \) events (one transition labeled by a set of events). In state \( Y \) only \( a_4 \) or \( a_5 \) events are accepted. Note that the ACC metric does not replace \( CC \) but complements it by providing additional insight into complexity. It groups events that have similar transitions and an identical effect on a state. The metric gives an indication of how complex and difficult it is for a human to read and understand the model by navigating the states and memorizing their history. The metric is not concerned with the number of tests required to exercise the state machine. ACC can be calculated using the following equation: $$ACC = E_U - N + 1,$$ where $E_U$ denotes the total number of unique edges and $N$ is the total number of nodes. For instance, the ACC of the ASD state machine depicted earlier in Figure 6a can be calculated as follows: $$E_U = 2, N = 2,$$ $$ACC = 2 - 2 + 1 = 1.$$ **Application to the Door models** The ACC of the Door interface model depicted in Figure 2 is 1, while the ACC of the design model depicted in Figure 4 is 4. The ACC of the Sensor interface model of Figure 3 is 1.

### 4.3 Halstead metrics, LoC and maintainability index

Using Halstead's approach, metrics are collected by counting operators and operands of source code [17]. We introduce these metrics and discuss how we tailored them to ASD models. Furthermore, we show how the lines of code metric is collected for ASD models.
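The ACC computation of Section 4.2 differs from CC only in grouping parallel edges between the same pair of states; a sketch in Python (our own illustration, applied to the Figure 6 machine, where the three X-to-Y events collapse into one unique edge):

```python
def actual_complexity(edges, nodes):
    """ACC = E_u - N + 1: multiple edges between the same pair of states
    count as one edge labeled by a set of events (Section 4.2)."""
    unique = {(src, dst) for src, dst, _event in edges}
    return len(unique) - len(nodes) + 1

edges = [("X", "Y", "a1"), ("X", "Y", "a2"), ("X", "Y", "a3"),
         ("Y", "Y", "a4"), ("Y", "Y", "a5")]
print(actual_complexity(edges, {"X", "Y"}))  # E_u = 2, so 2 - 2 + 1 = 1
```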
Another metric, called the maintainability index, can be derived from the Halstead metrics, the lines of code and the CC metric. We show how this metric is calculated for ASD models. We start by introducing the Halstead metrics. The metrics measure the cognitive load of a program, which is the mental effort needed to understand, maintain and develop the program. The higher the load, the more time it takes to design or understand the program, and the higher the chance of introducing bugs. Halstead considered programs as implementations of algorithms, consisting of operators and operands. His metrics are designed to measure the complexity of any kind of algorithm, regardless of the language in which it is implemented. Halstead metrics use the following basic measures:

- $n_1$: the number of unique operators,
- $N_1$: the total number of occurrences of operators,
- $n_2$: the number of unique operands,
- $N_2$: the total number of occurrences of operands,
- $n = n_1 + n_2$, which indicates the model vocabulary,
- $N = N_1 + N_2$, which denotes the length of the model.

For any ASD model we consider the following to be operands:

- state variables used as guards,
- states of the state machine,
- data variables in events and actions.

Furthermore, we consider the following to be operators:

- events (calls, internal events and stimuli callbacks) and actions (all responses including return values and callbacks),
- operators on state variables such as \textit{not}, \textit{and}, \textit{or}, >, <, ==, <=, >=, +, -, and \textit{otherwise} (a keyword that denotes the else part of a guard),
- operators on data variables such as $\Rightarrow$ and $\Leftarrow$ (the value of a variable is stored and retrieved), and $ (a literal value that the programming language allows).
The basic measures are then used to calculate the metrics below:

- Volume: $V = N \times \log_2 n$,
- Difficulty: $D = (n_1/2) \times (N_2/n_2)$,
- Effort: $E = D \times V$, which denotes the effort spent to make the model,
- Time required to understand the model: $T = E/18$ (seconds),
- Expected number of bugs: $B = V/3000$.

The volume metric $V$ considers the information content of a program in bits. Assuming that humans use binary search when selecting the next operand or operator to write, Halstead interpreted the volume as the number of mental comparisons a developer would need to write a program of length $N$. Program difficulty $D$ is based on a psychology theory that adding new operators while reusing the existing operands increases the difficulty of understanding an algorithm. Program effort $E$ measures the mental effort required to implement or comprehend an algorithm. It is measured in elementary mental discriminations. For each mental comparison (and there are $V$ of them), depending on the difficulty, the human mind performs several elementary mental discriminations. The rate at which a person performs elementary mental discriminations is given by the Stroud number, which ranges between 5 and 20 elements per second. Halstead empirically determined that in the calculation of the time $T$ to understand an algorithm this constant should be adjusted to 18. Finally, the estimated number of bugs $B$ correlates with the volume of the software: the more the size increases, the higher the likelihood of introducing bugs. Halstead empirically estimated the number of bugs as the volume divided by 3000. We calculate the lines of code metric not only from the total number of rows in the model but also from the number of actions in the Actions list: each action counts as one line. For instance, the specification of the \textit{Door} interface model contains 4 LoC.
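The five Halstead formulas above can be sketched directly; the counts used in the example below are hypothetical (not taken from the Door models), chosen only to exercise the formulas.

```python
import math

def halstead(n1, N1, n2, N2):
    """Halstead metrics from the four basic counts (Section 4.3)."""
    n = n1 + n2                # vocabulary
    N = N1 + N2                # length
    V = N * math.log2(n)       # volume
    D = (n1 / 2) * (N2 / n2)   # difficulty
    E = D * V                  # effort
    T = E / 18                 # time to understand, in seconds
    B = V / 3000               # expected number of bugs
    return {"V": V, "D": D, "E": E, "T": T, "B": B}

# Hypothetical counts for a small model:
m = halstead(n1=5, N1=9, n2=4, N2=7)
print({k: round(v, 2) for k, v in m.items()})
```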
The original maintainability index of source code is calculated from the volume, the LoC and the \textit{CC} of the source code [10]. It indicates whether it is worthwhile to keep maintaining, modifying and extending a program, or whether one should immediately consider refactoring or redesigning it. The $MI$ of good code should be above 85, and not less than 65 in the worst case. The maintainability index is defined as follows: $$MI = 171 - 5.2 \times \ln(V) - 0.23 \times CC - 16.2 \times \ln(LOC)$$ Microsoft incorporated the metric into the Visual Studio environment with a slight modification of the above formula: $$MI = \max(0, (171 - 5.2 \times \ln(V) - 0.23 \times CC - 16.2 \times \ln(LOC)) \times 100/171)$$ The formula produces a number between 0 and 100, where 20 or above indicates good and highly maintainable source code. \textbf{Application to the Door models} Table 1 lists the metrics of the three ASD models of the \textit{Door} system. Notable is the time required to understand the models: the reader of this paper is expected to read and understand the specification of the \textit{Door} design model in about 210 seconds. All models exhibit a maintainability index of 20 or above, hence they are highly maintainable. The rest of the data provided in the table is self-explanatory.

Table 1: Metrics of Door controller models

<table> <thead> <tr> <th>Model</th> <th>Volume</th> <th>Bugs</th> <th>Difficulty</th> <th>Time (s)</th> <th>LoC</th> <th>MI</th> </tr> </thead> <tbody> <tr> <td>Door interface</td> <td>33</td> <td>0.01</td> <td>2</td> <td>4</td> <td>4</td> <td>76</td> </tr> <tr> <td>Door design</td> <td>236</td> <td>0.08</td> <td>16</td> <td>210</td> <td>19</td> <td>55</td> </tr> <tr> <td>Sensor interface</td> <td>56</td> <td>0.02</td> <td>4</td> <td>13</td> <td>6</td> <td>70.5</td> </tr> </tbody> </table>

### 4.4 Metrics for formal verification overhead

ASD uses model checking for the formal verification of interface models and design models.
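As a quick sanity check, the MI figures in Table 1 above can be reproduced from the Microsoft variant of the maintainability-index formula; a minimal sketch in Python (our own code, using the volume, CC and LoC values listed for the Door system):

```python
import math

def maintainability_index(volume, cc, loc):
    """Microsoft variant of the maintainability index, normalized to 0..100."""
    raw = 171 - 5.2 * math.log(volume) - 0.23 * cc - 16.2 * math.log(loc)
    return max(0.0, raw * 100 / 171)

# (volume, CC, LoC) triples taken from the Door example:
print(round(maintainability_index(33, 1, 4)))     # Door interface -> 76
print(round(maintainability_index(236, 4, 19)))   # Door design    -> 55
print(round(maintainability_index(56, 2, 6), 1))  # Sensor         -> 70.5
```

The agreement with Table 1 also confirms that the table's MI values were computed with the CC term, not ACC.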
The model checking tool produces statistical information about the state space that captures all possible execution scenarios of a model (or of a group of communicating models). ![Figure 8: List of models and verification metrics (states and verification time).](image) Figure 8 depicts a screenshot of the formal verification results of ASD:Suite. It includes the design model of the Door controller and its used interface models. A green color indicates success of the formal check, while red indicates a failing result. As can be seen from the figure, the number of generated states of the design model for the deadlock check is 47, and the time required for all listed checks to complete is less than a minute. These metrics can also be obtained from a file generated by ASD:Suite when the verification check completes. The deadlock check for the Door design model is marked by a green tick sign, indicating that the design model is deadlock-free for all possible execution paths.

## 5 Optimal values and recommended limits of metrics

In this section, we propose limits of metrics for good-quality interface and design models. The limits were established after carefully analyzing and reviewing 615 interface and design models.
Table 2: Optimal values of metrics for ASD models

<table> <thead> <tr> <th>Metric</th> <th>Interface model (good / refactor / rewrite)</th> <th>Design model (good / refactor / rewrite)</th> </tr> </thead> <tbody> <tr> <td>CC</td> <td>≤ 30 / (30, 50) / &gt; 50</td> <td>≤ 30 / (30, 50) / &gt; 50</td> </tr> <tr> <td>ACC</td> <td>≤ 20 / (20, 40) / &gt; 40</td> <td>≤ 20 / (20, 40) / &gt; 40</td> </tr> <tr> <td>Volume</td> <td>&lt; 8K / (8K, 14K) / &gt; 14K</td> <td>&lt; 8K / (8K, 14K) / &gt; 14K</td> </tr> <tr> <td>LoC</td> <td>&lt; 200 / (200, 400) / &gt; 400</td> <td>&lt; 400 / (400, 800) / &gt; 800</td> </tr> <tr> <td>MI</td> <td>&gt; 20 / (10, 20) / &lt; 10</td> <td>&gt; 20 / (10, 20) / &lt; 10</td> </tr> <tr> <td>VT</td> <td>&lt; 1 min / (1 min, 5 min) / &gt; 5 min</td> <td>&lt; 1 min / (1 min, 5 min) / &gt; 5 min</td> </tr> </tbody> </table>

Table 2 lists all metrics and the advised limits in our industrial context; each cell gives the acceptable range, the range in which a model should be refactored, and the range in which it should be rewritten. As can be seen from the table, the limits for interface and design models are similar, except for the LoC metric. Note that in our industrial context the CC of a module written in C++ should not exceed 10. If source code exhibits a CC between 10 and 40, the code should be refactored, while if it is more than 40, the code is end-of-life and has to be rewritten in a simpler way. This CC limit may vary from one organization to another. The reason that the CC limit for models is raised compared to the CC limit for source code is that the metrics are collected at the level of models. We found that the tabular representation of the models raises the abstraction level and increases the understandability of the software artifact compared to source code. Models with a CC less than 30 were easy to understand when reviewing the tabular format of the models. Similarly, we were reasonably comfortable reviewing models that exhibit an ACC of less than 20. For the size metric, we used the limit suggested by VerifySoft [28] and observed that models with a volume exceeding 8000 are big in size.
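Applying such banded limits is mechanical; a small sketch (the helper name and verdict labels are ours, not from ASD:Suite) that classifies a metric value against a good/poor threshold pair, shown here with the CC limits from Table 2:

```python
def classify(value, good, poor):
    """Band a metric value: at most `good` is fine, at most `poor` calls
    for refactoring, anything beyond calls for a redesign (Table 2)."""
    if value <= good:
        return "ok"
    if value <= poor:
        return "refactor"
    return "redesign"

print(classify(18, good=30, poor=50))   # ok
print(classify(39, good=30, poor=50))   # refactor
print(classify(63, good=30, poor=50))   # redesign
```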
Finally, the thresholds of MI were chosen as used by Microsoft. In our industrial context, we recommend that the verification time (the waiting time for the model checker during debugging) should not exceed 1 minute. The reason is that we want to prevent the productivity of developers from being hindered by the model-checking technology. We want to avoid the situation where a developer fixes an error in the specification and then waits for a long time before the model checker succeeds or detects another error (and this repeats itself, causing undesired long waits that reduce the productivity of the designer). More importantly, this limit is also set to prevent designers from making overly complex specifications on the assumption that model checking will keep them safe. Design and modeling are creative processes, and good metrics for a model do not always mean that the underlying design is good. It is possible that certain models exhibit metrics within the accepted limits while mixing levels of abstraction, with an inappropriate decomposition of components and mixed responsibilities. Human creativity is still needed to judge whether a design is conceptually acceptable, while metrics can help to detect bad smells and decay in the architecture very early.

## 6 Detailed data analysis

In this section we detail the application of the proposed metrics and the recommended limits to measure and evaluate the existing ASD models, see Table 3. To make the process of data collection and analysis of the models more efficient, we built a tool that automatically extracts the metrics and visualizes the results graphically. The tool is compatible with ASD:Suite version 9.2.7. We used the tool to extract metrics from 615 ASD interface and design models, developed in four different projects, in the period from 2008 until the end of 2015.
<table> <thead> <tr> <th>Metric</th> <th>Interface Models</th> <th>Design Models</th> </tr> </thead> <tbody> <tr> <td># of models</td> <td>348</td> <td>267</td> </tr> <tr> <td>Average CC</td> <td>18</td> <td>39.4</td> </tr> <tr> <td>Average ACC</td> <td>4.5</td> <td>11</td> </tr> <tr> <td>Total Volume</td> <td>204,593</td> <td>3,533,640</td> </tr> <tr> <td>Total LoC</td> <td>12,580</td> <td>205,772</td> </tr> <tr> <td>Total C++ LoC</td> <td>55,710</td> <td>611,724</td> </tr> </tbody> </table> Table 3: Summary of statistical data of developed models Table 3 provides the collected metrics data of the models. The total number of interface models is 348, while there are 267 design models. Rows 3 and 4 list the average CC and ACC measures for the models. Row 5 depicts the total volume (size) of the models. Row 6 lists the total number of lines of code in the models, while the last row lists the total number of lines of the generated C++ code, excluding blank lines. <table> <thead> <tr> <th>Metric</th> <th>Limit</th> <th>Interface models</th> <th>Design models</th> <th>Percentage</th> </tr> </thead> <tbody> <tr> <td>CC</td> <td>≤ 30</td> <td>299</td> <td>178</td> <td>77.56%</td> </tr> <tr> <td></td> <td>(30, 50)</td> <td>24</td> <td>26</td> <td>8.13%</td> </tr> <tr> <td></td> <td>&gt; 50</td> <td>25</td> <td>63</td> <td>14.31%</td> </tr> <tr> <td>ACC</td> <td>≤ 20</td> <td>333</td> <td>231</td> <td>91.71%</td> </tr> <tr> <td></td> <td>(20, 40)</td> <td>7</td> <td>17</td> <td>3.9%</td> </tr> <tr> <td></td> <td>&gt; 40</td> <td>5</td> <td>19</td> <td>4.4%</td> </tr> <tr> <td>Volume</td> <td>&lt; 8K</td> <td>344</td> <td>181</td> <td>85.37%</td> </tr> <tr> <td></td> <td>(8K, 14K)</td> <td>3</td> <td>17</td> <td>3.26%</td> </tr> <tr> <td></td> <td>&gt; 14K</td> <td>1</td> <td>69</td> <td>11.4%</td> </tr> <tr> <td>LOC</td> <td>&lt; 200</td> <td>338</td> <td>182</td> <td>84.55%</td> </tr> <tr> <td></td> <td>(200, 400)</td> <td>5</td> <td>14</td> <td>3.08%</td> </tr> <tr>
<td></td> <td>&gt; 400</td> <td>5</td> <td>71</td> <td>12.36%</td> </tr> <tr> <td>VT</td> <td>&lt; 1 min</td> <td>348</td> <td>266</td> <td>99.84%</td> </tr> <tr> <td></td> <td>(1 min, 5 min)</td> <td>0</td> <td>1</td> <td>0.16%</td> </tr> <tr> <td></td> <td>&gt; 5 min</td> <td>0</td> <td>0</td> <td>0%</td> </tr> </tbody> </table> Table 4: Analysis of metrics values We separated ASD interface models from design models and evaluated them in isolation. After that, we ordered the models by CC, ACC and volume, to rank them by complexity and size. The purpose of sorting the models is to identify the complex and big models present in our archive, so that they can be refactored and improved. The data analysis of these models is summarized in Table 4. In summary, the analysis revealed that over 22% of the models are relatively complex based on the CC metric, and these models should be refactored to reduce complexity. Considering the ACC metric, over 10% of the models should be refactored into simpler models. We discuss the relation between CC and ACC shortly. With respect to size, we considered the volume and LoC metrics. Over 15% of the models are big in size and should be split into smaller models. Similarly, over 15% of the models contain many lines of code. Most of these big models also exhibit high complexity metrics; therefore, improving one metric will consequently improve the others. All models were verified in less than 1 minute, except one model, which took the model checker about 5 minutes. This model is also the biggest and most complex of all. The reason that all models were verified in a short time is that the execution of the components is configured to be single-threaded; therefore there is no concurrency that would lead to the generation of big state spaces.
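The percentages in Table 4 are computed over the combined total of 615 models; as a quick check, the CC row can be reproduced as follows (plain Python, our own sketch):

```python
# Combine interface and design counts per CC band and express them as a
# percentage of the 615-model total, as in Table 4.
total = 348 + 267  # interface + design models, per Table 3
cc_bands = {"<= 30": 299 + 178, "(30, 50)": 24 + 26, "> 50": 25 + 63}
for band, count in cc_bands.items():
    print(band, round(100 * count / total, 2))  # 77.56, 8.13, 14.31
```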
The data and results of our analysis are communicated to the development teams, together with the metric extraction tool, to facilitate repeating the experiments. The teams appreciated the work, since it helped them uncover hidden complex and big models, although a controlled empirical validation of the metrics is planned for future work. The team of one of the projects planned refactoring tasks to gradually improve the quality of complex models. For newly started projects, developers frequently check the quality of their models to address any issue early during the modeling phase, before the final delivery of the models. ![Figure 9: Representing a stateless machine as a flower-shape (CC) or a mouse ear (ACC)](image) One observation during the data analysis is that not all models with a high CC are really complex to understand. We discuss this observation by comparing the CC and ACC of an example specification and show how the ACC metric provides more insight into complexity. Consider Figure 9. At the left of the figure a stateless machine accepts $N$ events. If we set $N$ to 31 (meaning that 31 different events are accepted by the machine), then $CC = 31$ while $ACC = 1$. Therefore, from the CC perspective the state machine is considered complex, since it exceeds the complexity limit we set as a guideline. ![Figure 10: Complexity of interface models of components sorted by ACC](image) In fact, models that exhibit flower-shaped behavior are not very complex, although they may be rather big because the interface is verbose, with many events. These machines are relatively simple to understand, since they just consume input events in a single state. Such models exhibit a very low \( ACC \). Correlating \( CC \) and \( ACC \) can help developers detect interfaces that include many different events that actually have the same behavior.
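The flower-shaped case is easy to reproduce with the two graph formulas from Sections 4.1 and 4.2; a sketch (state and event names are our own) for the single-state machine of Figure 9 with $N = 31$ events:

```python
# One state S with 31 self-loop events: every event is its own edge for CC,
# but all of them collapse into a single unique edge for ACC.
N_EVENTS = 31
edges = [("S", "S", f"e{i}") for i in range(N_EVENTS)]

cc = len(edges) - 1 + 1                            # E - N_states + 1 = 31
acc = len({(s, d) for s, d, _ in edges}) - 1 + 1   # one unique edge -> 1
print(cc, acc)  # 31 1
```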
In hindsight, it indicates to developers the need to split such an interface early and to categorize the events into smaller models. Figure 10 depicts the \( CC \) and \( ACC \) of the interface models of a number of components in one project. \textit{Comp08} in the figure gives an example of a flower-shaped interface model with a high \( CC \) and a low \( ACC \). By reviewing the contents of the model we realized that the interface contains many events that should be categorized and split into smaller interface models. Notable are \textit{Comp05} and \textit{Comp06}, which exhibit similar metrics. After reviewing the models we found that they are exact copies (they model two physical sensors of the same type with different \textit{ids}). An action was taken to combine the two models into one and to parameterize the \textit{ids} of the sensors. We observed that the Halstead T and E metrics are questionable. We found that these metrics provide good estimates for models that are within the recommended volume limit of 8000. For some models that exceed this limit the metrics are not very accurate. Empirical experiments are needed to adapt the formula for this type of model.

## 7 Conclusions and future work

As industry is rapidly migrating towards model-based development, it is becoming urgent to establish means to measure the quality of models, since they form the main software artifact in the modeling paradigm. In this article we proposed a number of metrics to measure the quality of ASD models, which are state machines specified in a tabular format. An apparent limitation of our work is that we only considered the structural complexity of models. The added complexity of introducing guards in the specification is not considered. In fact, guards can have an effect on complexity similar to that of introducing states. For some developers, specifications with guards are relatively more complex to understand than specifications without guards.
Future empirical evaluation is needed to validate this observation. The metrics and the limits proposed in this article were established based on consensus and alignment with the majority of ASD designers, through a number of meetings and interviews. The designers applied the metrics and the limits to their own developed models. As future work we are planning to validate the metrics and the limits by executing controlled empirical experiments with a set of models selected from different projects. For further validation we want to answer a number of questions, such as: is it always the case that any model big in size is complex (and vice versa)? Which metric contributes more to the number of bugs in the field: size or complexity? How is McCabe's \( CC \) metric correlated with Halstead's difficulty metric? Should we pay more attention to one of them, or to both? How can we re-calibrate Halstead's expected number of bugs, given that models are formally verified? Another interesting direction is to correlate these metrics with software quality attributes such as extensibility, scalability, testability and verifiability. A further future direction is to detect similarities in the models caused by duplicating guards or response events in the Actions list. As highlighted previously in the paper, we accidentally detected clones between models by observing the plots of complexity. In the future we will investigate other systematic means to detect clones between models (part of a model being included in another model). Furthermore, modularity metrics will be introduced to indicate the degree of coupling and cohesion among the models. Finally, the results of this work reveal the importance of and need for metrics at the model level. Based on the metric feedback, and the subsequent review of the flagged models, interesting patterns and opportunities for model improvement were identified.
Moreover, the results reveal that more work is needed to extend the set of metrics and to make them less sensitive to, or biased by, certain patterns and aspects.

Acknowledgment

We would like to thank Sven Weber for his constructive and valuable comments on the article.

References

If you want to receive reports, send an email to: wsinsan@tue.nl (we cannot guarantee the availability of the requested reports). **In this series appeared (from 2012):** <table> <thead> <tr> <th>Date</th> <th>Author(s)</th> <th>Title</th> </tr> </thead> <tbody> <tr> <td>12/01</td> <td>S. Cranen</td> <td>Model checking the FlexRay startup phase</td> </tr> <tr> <td>12/02</td> <td>U. Khadim and P.J.L. Cuijpers</td> <td>Appendix C / G of the paper: Repairing Time-Determinism in the Process Algebra for Hybrid Systems ACP</td> </tr> <tr> <td>12/03</td> <td>M.M.H.P. van den Heuvel, P.J.L. Cuijpers, J.J. Lukkien and N.W. Fisher</td> <td>Revised budget allocations for fixed-priority-scheduled periodic resources</td> </tr> <tr> <td>12/04</td> <td>Ammar Osaiweran, Tom Fransen, Jan Friso Groote and Bart van Rijnsoever</td> <td>Experience Report on Designing and Developing Control Components using Formal Methods</td> </tr> <tr> <td>12/05</td> <td>Sjoerd Cranen, Jeroen J.A. Keiren and Tim A.C. Willemse</td> <td>A cure for stuttering parity games</td> </tr> <tr> <td>12/06</td> <td>A.P. van der Meer</td> <td>CIF MSOS type system</td> </tr> <tr> <td>12/07</td> <td>Dirk Fahland and Robert Prüfer</td> <td>Data and Abstraction for Scenario-Based Modeling with Petri Nets</td> </tr> <tr> <td>12/08</td> <td>Luc Engelen and Anton Wijs</td> <td>Checking Property Preservation of Refining Transformations for Model-Driven Development</td> </tr> <tr> <td>12/09</td> <td>M.M.H.P. van den Heuvel, M. Behnam, R.J. Bril, J.J. Lukkien and T. Nolte</td> <td>Opaque analysis for resource-sharing components in hierarchical real-time systems - extended version –</td> </tr> <tr> <td>12/10</td> <td>Milos Stolikj, Pieter J. L.
Cuijpers and Johan J. Lukkien</td> <td>Efficient reprogramming of sensor networks using incremental updates and data compression</td> </tr> <tr> <td>12/11</td> <td>John Businge, Alexander Serebrenik and Mark van den Brand</td> <td>Survival of Eclipse Third-party Plug-ins</td> </tr> <tr> <td>12/12</td> <td>Jeroen J.A. Keiren and Martijn D. Klabbers</td> <td>Modelling and verifying IEEE Std 11073-20601 session setup using mCRL2</td> </tr> <tr> <td>12/13</td> <td>Ammar Osaiweran, Jan Friso Groote, Mathijs Schuts, Jozef Hooman and Bart van Rijnsoever</td> <td>Evaluating the Effect of Formal Techniques in Industry</td> </tr> <tr> <td>12/14</td> <td>Ammar Osaiweran, Mathijs Schuts, and Jozef Hooman</td> <td>Incorporating Formal Techniques into Industrial Practice</td> </tr> <tr> <td>13/01</td> <td>S. Cranen, M.W. Gazda, J.W. Wesselink and T.A.C. Willemse</td> <td>Abstraction in Parameterised Boolean Equation Systems</td> </tr> <tr> <td>13/02</td> <td>Neda Noroozi, Mohammad Reza Mousavi and Tim A.C. Willemse</td> <td>Decomposability in Formal Conformance Testing</td> </tr> <tr> <td>13/03</td> <td>D. Bera, K.M. van Hee and N. Sidorova</td> <td>Discrete Timed Petri nets</td> </tr> <tr> <td>13/04</td> <td>A. Kota Gopalakrishna, T. Ozcelebi, A. Liotta and J.J. Lukkien</td> <td>Relevance as a Metric for Evaluating Machine Learning Algorithms</td> </tr> <tr> <td>13/05</td> <td>T. Ozcelebi, A. Weffers-Albu and J.J. Lukkien</td> <td>Proceedings of the 2012 Workshop on Ambient Intelligence Infrastructures (WAmI)</td> </tr> <tr> <td>13/06</td> <td>Lotfi ben Othmane, Pelin Angin, Harold Weffers and Bharat Bhargava</td> <td>Extending the Agile Development Process to Develop Acceptably Secure Software</td> </tr> <tr> <td>13/08</td> <td>Mark van den Brand and Jan Friso Groote</td> <td>Software Engineering: Redundancy is Key</td> </tr> <tr> <td>13/09</td> <td>P.J.L.
Cuijpers</td> <td>Prefix Orders as a General Model of Dynamics</td> </tr> <tr> <td>14/01</td> <td>Jan Friso Groote, Remco van der Hofstad and Matthias Raffelsieper</td> <td>On the Random Structure of Behavioural Transition Systems</td> </tr> <tr> <td>14/02</td> <td>Maurice H. ter Beek and Erik P. de Vink</td> <td>Using mCRL2 for the analysis of software product lines</td> </tr> <tr> <td>14/03</td> <td>Frank Peeters, Ion Barosan, Tao Yue and Alexander Serebrenik</td> <td>A Modeling Environment Supporting the Co-evolution of User Requirements and Design</td> </tr> <tr> <td>14/04</td> <td>Jan Friso Groote and Hans Zantema</td> <td>A probabilistic analysis of the Game of the Goose</td> </tr> <tr> <td>14/05</td> <td>Hrishikesh Salunkhe, Orlando Moreira and Kees van Berkel</td> <td>Buffer Allocation for Real-Time Streaming on a Multi-Processor without Back-Pressure</td> </tr> <tr> <td>14/06</td> <td>D. Bera, K.M. van Hee and H. Nijmeijer</td> <td>Relationship between Simulink and Petri nets</td> </tr> <tr> <td>14/07</td> <td>Reinder J. Bril and Jinkyu Lee</td> <td>CRTS 2014 - Proceedings of the 7th International Workshop on Compositional Theory and Technology for Real-Time Embedded Systems</td> </tr> <tr> <td>14/08</td> <td>Fatih Turkmen, Jerry den Hartog, Silvio Ranise and Nicola Zannone</td> <td>Analysis of XACML Policies with SMT</td> </tr> <tr> <td>14/09</td> <td>Ana-Maria Şutîi, Tom Verhoeff and M.G.J. van den Brand</td> <td>Ontologies in domain specific languages – A systematic literature review</td> </tr> <tr> <td>14/10</td> <td>M. Stolikj, T.M.M. Meyfroyt, P.J.L. Cuijpers and J.J.
Lukkien</td> <td>Improving the Performance of Trickle-Based Data Dissemination in Low-Power Networks</td> </tr> <tr> <td>15/01</td> <td>Önder Babur, Tom Verhoeff and Mark van den Brand</td> <td>Multiphysics and Multiscale Software Frameworks: An Annotated Bibliography</td> </tr> <tr> <td>15/02</td> <td>Various</td> <td>Proceedings of the First International Workshop on Investigating Dataflow in Embedded computing Architectures (IDEA 2015)</td> </tr> <tr> <td>15/03</td> <td>Hrishikesh Salunkhe, Alok Lele, Orlando Moreira and Kees van Berkel</td> <td>Buffer Allocation for Realtime Streaming Applications Running on a Multi-processor without Back-pressure</td> </tr> <tr> <td>15/05</td> <td>Sarmen Keshishzadeh and Jan Friso Groote</td> <td>Exact Real Arithmetic with Perturbation Analysis and Proof of Correctness</td> </tr> <tr> <td>15/06</td> <td>Jan Friso Groote and Anton Wijs</td> <td>An O(m log n) Algorithm for Stuttering Equivalence and Branching Bisimulation</td> </tr> <tr> <td>17/01</td> <td>Ammar Osaiweran, Jelena Marincic and Jan Friso Groote</td> <td>Assessing the quality of tabular state machines through metrics</td> </tr> </tbody> </table>
Featured Model-based Mutation Analysis

Xavier Devroey (PRECISE Research Center, University of Namur, Belgium, xavier.devroey@unamur.be), Gilles Perrouin (PRECISE Research Center, University of Namur, Belgium, gilles.perrouin@unamur.be), Mike Papadakis (SnT, SERVAL Team, University of Luxembourg, michail.papadakis@uni.lu), Axel Legay (INRIA Rennes, France, axel.legay@inria.fr), Pierre-Yves Schobbens (PRECISE Research Center, University of Namur, Belgium, pierre-yves.schobbens@unamur.be), Patrick Heymans (PRECISE Research Center, University of Namur, Belgium, patrick.heymans@unamur.be)

Published in: Proceedings of the 38th International Conference on Software Engineering. DOI: 10.1145/2884781.2884821. Publication date: 2016. Document version: peer-reviewed version.

ABSTRACT

Model-based mutation analysis is a powerful but expensive testing technique. We tackle its high computation cost by proposing an optimization technique that drastically speeds up the mutant execution process. Central to this approach is the Featured Mutant Model, a modelling framework for mutation analysis inspired by the software product line paradigm. It uses behavioural variability models, viz., Featured Transition Systems, which enable the optimized generation, configuration and execution of mutants. We provide results, based on models with thousands of transitions, suggesting that our technique is fast and scalable. We found that it outperforms previous approaches by several orders of magnitude and that it makes higher-order mutation practically applicable.

Keywords: Mutation Analysis, Variability, Featured Transition Systems

CCS Concepts: • Software and its engineering → Software testing and debugging; Software product lines; • General and reference → Performance

1.
INTRODUCTION

Mutation analysis is an established technique for either evaluating test suites’ effectiveness [5, 24, 50] or supporting test generation [23, 50, 54]. It works by injecting artificial defects, called mutations, into the code or the model under test, yielding mutants, and measures test effectiveness based on the number of detected mutants. Researchers have provided evidence that detecting mutants results in finding real faults [5, 33] and that tests designed to detect mutants reveal more faults than other test suites. This has been shown to be the case for model-based mutation too: Aichernig et al. [1] report that model mutants lead to tests that are able to reveal implementation faults that were found neither by manual tests nor by the actual operation of an industrial system. In addition, model-based mutation’s premise is to identify defects related to missing functionality and misinterpreted specifications [13]. This is desirable since code-based testing fails to identify these kinds of defects [28, 62]. Despite its power, mutation analysis is expensive, due to the large number of mutants that need to be generated and assessed with the candidate test cases. While this problem has been researched for code-based mutation, e.g., [32, 55], it remains open in the model-based context. Since typical real-world models involve thousands of mutants and test suites involve thousands of test cases, millions of test executions are needed. Addressing this problem is therefore vital for the scalability of mutation. This is a known issue that requires further research, as pointed out in the surveys of Jia and Harman [31] and Offutt [50]. To address this problem, we take inspiration from past research on software product lines (SPL). As suggested in our vision paper [18], we propose an approach to model mutants as members (also called variants or products) of an SPL.
Considering mutants as part of a family rather than in isolation yields a considerable advantage: shared execution at the model level [15]. This contrasts with existing SPL approaches [36, 37, 48], which require code and hence do not apply to model mutants. The key idea of our approach is to encode the mutants as products of an SPL. To do so, we use a Feature Diagram (FD) [34] together with a Featured Transition System (FTS) that represent the variations (i.e., applications of mutation operators) and the behaviour of the mutants, respectively. FTSs have been proposed by Classen et al. [15] to compactly model the behaviour of an SPL. They consist of a Transition System (TS) where each transition has been tagged to indicate which products are able to execute the transition. We use FTSs to embed all the mutants in one model, called the Featured Mutants Model (FMM). To optimise test execution, we rely on the FMM to: (i) only execute tests with mutants that are reachable by the tests, (ii) share common transitions among multiple executions, and (iii) merge different executions that reach previously visited states. Therefore, instead of performing multiple runs, i.e., executing a test against each mutant, we perform a single execution of the FMM. We performed an empirical evaluation which demonstrated that the FMM: (i) yields significant execution speedups, i.e., from 2.5 to 1,000 times faster compared to previous approaches; (ii) makes mutation analysis applicable to models much larger than those used in previous studies; and (iii) makes higher-order mutation feasible. In summary, the contributions of this paper are:

- FMM, a compact model which allows one to easily generate and configure mutants (of any order) of a transition system.
- An implementation of FMM in the Variability Intensive Behavioural teSting (VIBeS) framework [19], making it the first mutation testing tool for behavioural models that supports higher-order mutation.
Our implementation is publicly available at: https://projects.info.unamur.be/vibes/.

- A shared execution technique that allows executing tests with all relevant mutants in a single run. To the authors’ knowledge, this is the first approach that optimizes model-based mutation analysis.
- An empirical evaluation on a mix of real-world and generated models.
- Empirical results that contradict the general belief that “higher order mutation testing is too computationally expensive to be practical” [30]. Instead, they suggest that it can be applied to real-world systems.

The rest of this paper is organised as follows: Section 2 recalls the main concepts of mutation testing and variability modelling; Sections 3 and 4 present our approach and results, respectively. Finally, Section 5 discusses related work and Section 6 concludes the paper.

2. BACKGROUND

2.1 Transition Systems

In this paper, we consider transition systems as a fundamental formalism to express system behaviour. Our definition is adapted from [6], where atomic propositions have been omitted (we do not consider state internals):

**Definition 1 (Transition System (TS)).** A TS is a tuple \( (S, \text{Act}, \text{trans}, i) \) where \( S \) is a set of states, \( \text{Act} \) is a set of actions, \( \text{trans} \subseteq S \times \text{Act} \times S \) is a transition relation such that the TS is deterministic (with \( (s_1, \alpha, s_2) \in \text{trans} \) sometimes denoted \( s_1 \xrightarrow{\alpha} s_2 \)), and \( i \in S \) is the initial state.

As a convention, we start and end executions in the initial state. This ensures that executions are finite. Fig. 1(a) presents a (simple) example TS of a payment operation on a Card Payment Terminal (CPT). The model starts in the initial (\( \text{Init} \)) state, where the card holder has to insert his card. The CPT will select a means of payment (e.g., Visa, Mastercard, American Express, etc.)
and negotiate with the card chip to agree on a protocol for the transaction. Transactions can be performed either on-line or off-line, using a PIN code or a signature. Once the card holder has been identified, the CPT will perform the transaction off-line or on-line (in the latter case, it will contact the card issuer to authorize the transaction) and update the information on the card chip. Once the transaction has been completed (or aborted), the card holder may remove her card from the CPT. In model-based testing [61], test cases are derived from such a model of the system. For instance, a test case for the TS of Fig. 1(a) is \( \text{atc} = (\text{insert\_card}, \text{select\_app}, \text{negociate\_with\_card}, \text{abort}, \text{remove\_card}) \). Test selection can be guided by coverage criteria. For instance, the all-actions coverage criterion specifies that all the actions of the considered TS must appear in at least one of the selected abstract test cases. In this paper, we do not consider test concretization (see e.g. [44]).

2.2 Mutation Testing

In model-based testing, mutants are introduced based on model transformation rules that alter the system specification. These rules are called mutation operators. An example of a mutant obtained from the state missing operator applied on the Go\_offline state of the CPT system is presented in Fig. 1. There are two kinds of mutants: first-order mutants, where the original and the mutant models differ by a single model transformation, and higher-order mutants, derived from the original model after multiple transformations. When a mutant is detected by a test case, it is called killed. In the opposite situation, it is called live. In our case, a mutant is killed if a test case cannot be executed.
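The semantics of Definition 1 and the kill criterion above can be condensed into a few lines. The following is an illustrative sketch (not the paper's VIBeS tooling); the intermediate state names on the abort path (`Aborted`) are assumptions, as the full Fig. 1(a) is not reproduced here:

```python
# Minimal sketch of a deterministic TS (Definition 1) and the kill check:
# a mutant is killed when a test case cannot fire every action and
# return to the initial state.
class TS:
    def __init__(self, init, transitions):
        self.init = init
        # determinism lets us index transitions by (state, action)
        self.trans = {(s, a): t for (s, a, t) in transitions}

    def runs(self, tc):
        """True iff every action fires and the run ends back in init."""
        s = self.init
        for a in tc:
            if (s, a) not in self.trans:
                return False          # cannot fire the next action
            s = self.trans[(s, a)]
        return s == self.init         # executions start and end in i

# Fragment of the CPT model; the "Aborted" state name is an assumption.
cpt = TS("Init", [
    ("Init", "insert_card", "Card_in"),
    ("Card_in", "select_app", "App_uninit"),
    ("App_uninit", "negociate_with_card", "App_init"),
    ("App_init", "abort", "Aborted"),
    ("Aborted", "remove_card", "Init"),
])

atc = ["insert_card", "select_app", "negociate_with_card", "abort", "remove_card"]
killed = not cpt.runs(atc)  # atc runs to completion, so this mutant stays live
```

Running a test against a mutant TS built this way is exactly the enumerative check: one `runs` call per (test, mutant) pair.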
For instance, the test case \( \text{tc} = (\text{insert\_card}, \text{select\_app}, \text{negociate\_with\_card}, \text{check\_PIN\_online}, \text{go\_offline}, \text{update\_card\_info}, \text{remove\_card}) \) will kill the mutant of Fig. 1(b) since it fails to execute completely. A test case that can be completely executed on a mutant will not detect (kill) it, e.g., the test case \( \text{atc} \) defined in Section 2.1 will leave the mutant of Fig. 1(b) live because it can be executed completely. To measure the adequacy of testing, a standard metric called mutation score is used. It is defined as the ratio of mutants killed by the test set under assessment to the total number of considered mutants. To calculate the mutation score, one has to execute the whole test set against every selected mutant.

| Reference | Year | Employed Models | Av. tool | HOM |
|---|---|---|---|---|
| Fabbri et al. [22] | 1999 | statechart | - | - |
| Offutt et al. [49] | 2003 | statechart | - | - |
| Belli et al. [9] | 2006 | finite state automata & statechart | - | - |
| Belli et al. [8] | 2011 | finite state automata & statechart | - | ✓ |
| Aichernig et al. [1] | 2014 | State Machines | ✓ | - |
| Aichernig et al. [2] | 2014 | State Machines | - | - |
| Lackner & Schmidt [40] | 2014 | State Machines | - | - |
| Aichernig et al. [3] | 2015 | State Machines | - | - |
| Krenn et al. [39] | 2015 | State Machines | ✓ | - |
| This paper | 2016 | Transition Systems | ✓ | ✓ |

Table 1: Summary of model-based mutation approaches for behavioural models.

In our case, we consider deterministic TS and stop the execution of a test case as soon as the TS is unable to fire the next action. For the test case \( \text{tc} \) on the mutant in Fig. 1(b), the execution is stopped when it reaches
the CH Verified state, as it may not execute the next action (go offline) in tc, and the mutant TS is considered killed by tc. The mutant would have been kept live if another test case tc’ had followed the “online path” after CH Verified. Mutant execution is a time-consuming task [31], especially for large models. In our experiments, it took 3 days to run the mutants of our model with 10,000 states against each test case. The times reported in Table 6 are for running one mutant against one test case. In the following, we will call this approach of executing each test against each mutant separately the enumerative approach. Related studies on model-based mutation approaches for behavioural models are briefly described in Table 1: publication, year of publication, model types used, available tool, and the use of Higher-Order Mutation (HOM). In the literature, most of the existing approaches have been evaluated based on small models using a brute force technique that executes all mutants with all tests. This results in extremely long execution times and hinders scalability (in space and/or execution time). We believe that tool scalability, and the lack of available tools, are the main reasons why there are few model-based testing studies and they mostly use small models. In their recent survey, Jia and Harman [31] motivate the need for additional research on using mutation on program artefacts other than code. We believe that, since our tool is publicly available and scales well, it will foster experimentation on model-based mutation.

### 2.3 Variability Modelling

SPL engineering is a sub-discipline of software engineering based on the idea that we can build products (aka members) of the same family by systematically reusing software assets. Some assets are common to all members, whereas others are only shared by a subset of the family. Such variability is commonly captured by the notion of feature, defined as a unit of difference between products. Individual features can be specified using languages such as UML, and their relationships by Feature Diagrams (FDs) [34].
An example of an FD is provided at the top of Fig. 2. In this figure, the root feature \( m \) has 3 sub-features \( \text{smi}, \text{aex}, \text{wis} \), connected using an xor operator. FDs have their semantics defined in terms of valid products, i.e., legal combinations of features. In the FD of Fig. 2, a valid product is \( \{ m, \text{smi}, \text{smi\_Go\_offline} \} \), while the product \( \{ m, \text{smi}, \text{smi\_Go\_offline}, \text{aex}, \text{aex\_issuer\_accepts} \} \) is invalid because it does not respect the xor constraints. FD semantics is formal [58] and FDs can be encoded as boolean constraints. Thus, SAT or BDD solvers are commonly used to enumerate products or to check their validity. The main challenge in SPL engineering is to deal with the combinatorial explosion induced by the number of possible products, \( 2^N \) for \( N \) features in the worst case. FTSs address this problem and enable the efficient behavioural model checking of SPLs [15]. FTSs are Transition Systems (TSs) where each transition is labelled with a feature expression specifying which products of the SPL can execute the transition.
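The "FDs as boolean constraints" encoding can be made concrete with a brute-force stand-in for the SAT/BDD solvers mentioned above. This is only an illustrative sketch: the leaf feature names follow the FD of Fig. 2 as described in the text, and the validity predicate assumes the first-order xor constraint (exactly one mutation feature selected):

```python
from itertools import product

# Leaf mutation features of the FD of Fig. 2 (as described in the text).
leaves = ["smi_Go_offline", "smi_issuer_accepts",
          "aex_issuer_accepts", "wis_Card_in"]

def valid(selected):
    """First-order xor constraint: exactly one leaf feature is selected."""
    return len(selected) == 1

# Enumerate all 2^N feature assignments, then keep the valid products.
# A real implementation would hand the constraint to a SAT or BDD solver.
products = [frozenset(f for f, on in zip(leaves, bits) if on)
            for bits in product([False, True], repeat=len(leaves))]
mutants = [p for p in products if valid(p)]
# 4 valid products, one per first-order mutant of the FD
```

Swapping `valid` for a different cardinality constraint is all it takes to move to higher-order mutants, which is exactly the lever Section 3.3 uses.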
An FTS is thus a compact representation of the behaviour of an SPL:

**Definition 2 (Featured Transition System [15]).** A Featured Transition System (FTS) is a tuple \((S, \text{Act}, \text{trans}, \text{init}, d, \gamma)\), where: \( S, \text{Act}, \text{trans} \) are defined according to Definition 1; \( d \) is an FD; \( \gamma : \text{trans} \to ([d] \to \{ \text{true}, \text{false} \}) \) is a labelling function specifying for each transition which valid products may execute it; this function is represented as a boolean expression over the features of \( d \); and \( \text{init} : S \to ([d] \to \{ \text{true}, \text{false} \}) \) is a total function that indicates whether a state \( i \in S \) is an initial state for a product \( p \in [d] \), such that for every product \( p \in [d] \) there is exactly one initial state. This allows one to model mutants that change the initial state of the system.

An FTS example is provided at the bottom of Fig. 2. The transition \[ \text{CH\_verified} \xrightarrow{\text{go\_offline} \,/\, \neg \text{smi\_Go\_offline}} \text{Go\_offline} \] may only be executed in products with a valid configuration where the \( \text{smi\_Go\_offline} \) feature is not selected.

**Figure 1:** Card Payment Terminal: the original system and a mutant (state Go\_offline)

3. COMPACT MUTANTS MODEL

The key idea behind our approach is to represent mutants as a family of variations of the System Under Test (SUT). We model the SUT’s behaviour using a TS, called the original TS (to distinguish it from the mutant TSs). It is possible to model these variants as an FTS and its corresponding FD, where each feature corresponds to one application of one mutation operator on the original TS. The FTS and the FD represent all the possible mutants of an original TS and are called the Featured Mutants Model (FMM). For example, the FMM of Fig.
2 has an FD (at the top) with 3 mutation operators: the state missing (SMI) operator, which produces a mutant where one state is missing; the action exchange (AEX) operator, which produces a mutant where one transition has its action changed (to another action); and the wrong initial state (WIS) operator, which produces a mutant where the initial state has been set to another state. In this instance of the FD, the SMI operator has been applied twice (\( \text{smi\_issuer\_accepts}, \text{smi\_Go\_offline} \)), and the AEX and WIS operators have been applied once each (\( \text{aex\_issuer\_accepts}, \text{wis\_Card\_in} \)). This FD represents four mutants, where at most one leaf feature is selected. The FTS at the bottom of Fig. 2 represents all the possible variations, corresponding to the four applications of mutation operators, of the original TS. In order to derive one particular mutant (TS) from the FMM, one may use the FTS projection operator [15]. Practically, this operator first needs a valid product representing the desired mutant, e.g., \( p = \{ m, \text{smi}, \text{smi\_Go\_offline} \} \); then, each feature expression of the FTS is evaluated with features belonging to the product replaced by true, and other features replaced by false; finally, transitions with a feature expression evaluated to false (i.e., where \( \gamma_p = \text{false} \)) are removed from the FTS, and the initial state is set to the only state such that the feature expression on the initial transition is true (i.e., where \( \text{init}(i, p) = \text{true} \)). For instance, the projection of the FMM of Fig. 2 on \( p \) will produce the mutant TS of Fig. 1(b).

3.1 Building the Featured Mutants Model

We rely on the state-of-the-art operators proposed by Fabbri et al.
[22] to generate mutants from a TS:

- SMI (State Missing) removes a state (other than the initial state) and all its incoming/outgoing transitions;
- WIS (Wrong Initial State) changes the initial state;
- AEX (Action Exchange) replaces the action linked to a given transition by another action;
- AMI (Action Missing) removes an action from a transition;
- TMI (Transition Missing) removes a transition;
- TAD (Transition Add) adds a transition between two states;
- TDE (Transition Destination Exchange) modifies the destination of a transition.

Each operator can be used to generate mutants using the enumerative approach, where each mutant is formed as a new variation of the original TS (possibly introducing non-determinism with the AEX and TAD operators), or using the FMM approach, where each mutant is an addition to the FD. We detail the mutant generation procedures hereafter.

**Enumerative approach:** In the enumerative approach, each operator \( op \) is defined as a model transformation with input a TS \( ts \) representing the behaviour of the SUT. It produces another (mutant) TS \( ts_m \) representing the result of an operator on \( ts \). For instance, the AEX operator, shown on the left of Fig. 3, replaces the action \( a \) on transition \( s_1 \rightarrow s_2 \) by \( b \). Algorithm 1 details the enumerative approach, where the set of mutants \( muts \) is produced by applying each operator (in \( Ops \)) with random parameters a number of times (defined for each operator by the \( times \) function) on the original TS (line 4).
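Algorithm 1's enumerative generation can be sketched as follows. TMI and AEX are two of the operators listed above; the encoding of a TS as a set of `(source, action, target)` triples, the function names, and the three-transition example model are all illustrative assumptions, not the paper's implementation:

```python
import random

# Sketch of Algorithm 1 (enumerative approach) with two Fabbri et al.
# operators; a TS is a plain set of (source, action, target) triples.

def tmi(trans, rng):
    """Transition Missing: drop one randomly chosen transition."""
    victim = rng.choice(sorted(trans))
    return trans - {victim}

def aex(trans, rng):
    """Action Exchange: rename the action of one random transition."""
    (s, a, t) = rng.choice(sorted(trans))
    b = rng.choice(sorted({x for (_, x, _) in trans} - {a}))
    return (trans - {(s, a, t)}) | {(s, b, t)}

def generate(trans, ops, times, seed=0):
    """Apply each operator times[op] times to the ORIGINAL TS (line 4)."""
    rng = random.Random(seed)
    muts = []
    for op in ops:
        for _ in range(times[op.__name__]):
            muts.append(op(trans, rng))
    return muts

ts = {("Init", "insert_card", "Card_in"),
      ("Card_in", "select_app", "App_uninit"),
      ("App_uninit", "abort", "Init")}
muts = generate(ts, [tmi, aex], {"tmi": 2, "aex": 1})
# three mutant TSs, each one edit away from the original
```

Note that every operator application starts from the original `trans`, matching the first-order semantics of Algorithm 1; the FMM approach below instead threads the model under construction through each application.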
Algorithm 1 Mutant generation, enumerative approach
Require: \( ts = (S, Act, trans, i) \) {original TS}; \( Ops \) {set of operators to use}; \( times : Op \rightarrow \mathbb{N} \) {function specifying for each operator the number of applications}
Ensure: \( return = muts \) {set of produced mutants}
1: \( muts \leftarrow \emptyset \)
2: for all \( op \in Ops \) do
3: &nbsp;&nbsp; for all \( i \) between 1 and \( times(op) \) do
4: &nbsp;&nbsp;&nbsp;&nbsp; \( muts \leftarrow muts \cup op(\text{random}(ts)) \)
5: &nbsp;&nbsp; end for
6: end for
7: return \( muts \)

Algorithm 2 Mutant generation, FMM approach
Require: \( ts = (S, Act, trans, i) \) {original TS}; \( Ops_{fmm} \) {set of operators to use}; \( times_{fmm} : Op_{fmm} \rightarrow \mathbb{N} \) {function specifying for each operator the number of applications}
Ensure: \( fmm = (fs_{fmm}, fd_{fmm}) \) {FMM representing the mutants}
1: \( \gamma \leftarrow (\lambda t \rightarrow \text{true}) \)
2: \( fs_{fmm} \leftarrow (S, Act, trans, i, fd_{fmm}, \gamma) \)
3: \( fd_{fmm} \leftarrow m \) {initialised to root feature m}
4: for all \( op_{fmm} \in Ops_{fmm} \) do
5: &nbsp;&nbsp; for all \( i \) between 1 and \( times(op) \) do
6: &nbsp;&nbsp;&nbsp;&nbsp; \( fmm \leftarrow op_{fmm}(fmm) \)
7: &nbsp;&nbsp; end for
8: end for
9: return \( fmm \)

**FMM approach:** In the FMM approach, an operator (\( Op_{fmm} \)) is defined as a model transformation of an FMM (representing existing mutants) that produces an FMM representing the mutants: the previously existing mutants and the result of the \( Op_{fmm} \) mutation on the original TS (obtained in the FMM’s FTS by replacing the features by false in the feature expressions). For instance, on the right of Fig. 3, the AEX\(_{fmm}\) operator replaces the action \( a \) on transition \( s_1 \xrightarrow{a} s_0 \) of the base model by \( b \) as follows:

1. adding the feature expression \( \neg aex \) on transition \( s_1 \xrightarrow{a/\gamma_2} s_0 \), stating that \( s_1 \xrightarrow{a \,/\, \neg aex \wedge \gamma_2} s_0 \) may be fired only if the \( aex \) mutation is inactive (and if \( \gamma_2 \) is true);
2.
adding a transition \( s_1 \xrightarrow{b \,/\, aex \wedge \gamma_2} s_0 \), stating that the transition is fired with a \( b \) action only if the \( aex \) mutation is active (and if \( \gamma_2 \) is true);
3. adding an \( aex \) feature to \( fd_{fmm} \), representing the mutation done by \( Op_{fmm} \) (not shown in Fig. 3).

Algorithm 2 details the automated FMM building approach. We start with the original TS (line 2) and a \( \gamma \) function that labels each transition with a true feature expression (line 1). We then apply mutation operators (\( Ops_{fmm} \)) a specified number of times (\( times(op) \), line 5). Contrary to the enumerative approach, the mutation operators are applied on the FMM under construction, which is reused in the next iteration (line 6). This is mandatory, as the FMM contains all the previous mutations, which are taken into account in the model transformations (e.g., the \( \gamma_i \) expressions in Fig. 3). As we choose to only perform \( Op_{fmm} \) mutations on the original TS, this forbids operator composition on (previously) mutated elements. Doing so ensures that first-order mutation maps to only one edit of the original TS. Further details about the operators and specificities of the transformations can be found on the VIBeS website [17] in a technical note.

3.2 Featured Mutants Model Execution

In our context, test cases are defined as a sequence of actions in a TS (\( ts \)), such that one execution forms a path starting from and ending at the initial state (\( i \)) [20]: \( tc = (\alpha_1, \ldots, \alpha_n) \) such that \( \exists\, (i \xrightarrow{\alpha_1} s_k, \ldots, s_l \xrightarrow{\alpha_n} i) \). Recall that in the enumerative approach, if a test case cannot be executed by the mutant (denoted \( m \not\xrightarrow{tc} \)) or does not end in the initial state (considered as the accepting state), it is considered killed. Otherwise, the mutant is considered live.
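The individual mutants that such per-mutant execution runs tests against are derived with the FTS projection operator described in Section 3. A minimal sketch, with feature expressions as plain Python predicates; the transition and state names are loosely based on Fig. 1 and Fig. 2, and the `go_online` names are assumptions:

```python
# Sketch of FTS projection [15]: derive one mutant TS from an FMM whose
# transitions carry feature expressions (gamma, here a predicate over
# the set of selected features).
fts = [
    # (source, action, target, feature expression gamma)
    ("CH_verified", "go_offline", "Go_offline",
     lambda p: "smi_Go_offline" not in p),
    ("Go_offline", "update_card_info", "Completed",
     lambda p: "smi_Go_offline" not in p),
    ("CH_verified", "go_online", "Go_online",   # assumed online path
     lambda p: True),
]

def project(fts, prod):
    """Keep exactly the transitions whose gamma is true for the product."""
    return [(s, a, t) for (s, a, t, gamma) in fts if gamma(prod)]

mutant = project(fts, {"m", "smi", "smi_Go_offline"})
# the SMI mutant: both transitions touching Go_offline are removed
```

Projection is what makes the FMM a lossless encoding: any mutant TS can be recovered on demand, yet tests never need to enumerate them one by one.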
The set of live mutants, according to \( tc \) and the mutant set \( muts \), is defined as:

$$\text{liveEnum}(muts, tc) = \{ m \in muts \mid m \xrightarrow{tc} \}$$

In the FMM approach, a test case can be executed on an FMM’s (\( fmm \)) FTS (denoted \( fs_{fmm} \)) if there exists at least one mutant able to execute it. The enumerative approach executes each test case on each mutant separately. In contrast, one execution of a test case on the FMM explores all the reachable mutants (identified by the collected feature expressions \( \gamma \)). The set of live mutants in the FMM approach is defined as:

$$\text{liveFMM}(fmm, tc) = \{ p \in [fd_{fmm}] \mid fs_{fmm|p} \xrightarrow{tc} \}$$

Concretely, all possible paths in \( fs_{fmm} \) starting from \( i \) and ending in \( i \) will be considered, which allows one to deal with possible non-determinism introduced by a mutation. The live mutants are those able to execute at least one of those paths, i.e., those for which the product \( p \) satisfies all the feature expressions on the transitions of the considered path. For instance, consider the test case:

$$tc = (\text{insert\_card}, \text{select\_app}, \text{negociate\_with\_card}, \text{check\_PIN\_offline}, \text{go\_offline}, \text{update\_card\_info}, \text{remove\_card})$$

Executing the FMM of Fig.
2, it will fire the following transitions: the initial-state transition \( \xrightarrow{\neg \text{wis\_Card\_in}} \text{Init} \), then \( \text{Init} \xrightarrow{\text{insert\_card}} \text{Card\_in} \), \( \text{Card\_in} \xrightarrow{\text{select\_app}} \text{App\_uninit} \), \( \text{App\_uninit} \xrightarrow{\text{negociate\_with\_card}} \text{App\_init} \), \( \text{App\_init} \xrightarrow{\text{check\_PIN\_offline}} \text{CH\_verified} \), \( \text{CH\_verified} \xrightarrow{\text{go\_offline} \,/\, \neg \text{smi\_Go\_offline}} \text{Go\_offline} \), \( \text{Go\_offline} \xrightarrow{\text{update\_card\_info}} \text{Completed} \), \( \text{Completed} \xrightarrow{\text{remove\_card}} \text{Init} \). These transitions may only be fired by mutants for which all the feature expressions are true. In such a case, mutants need to respect the following constraint:

$$\neg \text{wis\_Card\_in} \wedge \neg \text{smi\_Go\_offline}$$

All mutants in the FD of Fig. 2 that satisfy this feature expression remain live after the execution of tc. The set of mutants killed by the test case is computed using the conjunction of the FD \( fd_{fmm} \) (encoded as a boolean formula) and the negation of this feature expression:

$$fd_{fmm} \wedge (\text{wis\_Card\_in} \vee \text{smi\_Go\_offline})$$

This corresponds to the set of mutants

$$\{ \{ m, \text{wis}, \text{wis\_Card\_in} \}, \{ m, \text{smi}, \text{smi\_Go\_offline} \} \}$$

In practice, \( \text{liveFMM}(fmm, tc) \) will produce a feature expression representing all the live mutants, as detailed in Algorithm 3. Initially, the algorithm computes all the paths in \( fs_{fmm} \) corresponding to the sequence of actions in tc (line 2). For one path, the conjunction of the feature expressions gives the mutants able to execute this path (line 4). Effort is saved this way by ignoring unreachable mutants and by sharing the execution of the common transitions.
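The single-path core of this computation (what Algorithm 3 does before non-determinism forces it to consider several paths) can be sketched as follows. Everything here is an illustrative assumption, with each feature expression γ again modelled as a Python predicate over the selected feature set:

```python
# Sketch of shared execution: walk the test case ONCE over the FTS,
# conjoin the feature expressions met along the way, then split the
# candidate mutant set into live and killed.

def live_fmm(fts_trans, init, tc, mutants):
    """fts_trans: {(state, action): (target, gamma)}; returns live mutants."""
    constraints, s = [], init
    for a in tc:
        t, gamma = fts_trans[(s, a)]
        constraints.append(gamma)  # shared by every mutant reaching here
        s = t
    # a mutant stays live iff it satisfies every gamma on the path
    return [p for p in mutants if all(g(p) for g in constraints)]

# Tiny two-transition fragment; gammas and feature names are illustrative.
fts_trans = {
    ("Init", "insert_card"): ("Card_in", lambda p: "wis_Card_in" not in p),
    ("Card_in", "remove_card"): ("Init", lambda p: True),
}
mutants = [frozenset({"wis_Card_in"}), frozenset({"aex_issuer_accepts"})]
live = live_fmm(fts_trans, "Init", ["insert_card", "remove_card"], mutants)
# only the aex mutant satisfies every gamma on the path and stays live
```

A real implementation would keep the constraint symbolic (a boolean formula over features, handed to a solver) rather than filtering an explicit mutant list, but the saving is the same: one traversal serves every reachable mutant.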
This conjunction is then disjoined with the conjunctions of the other paths to get the feature expression representing all the live mutants (line 4). This step results in savings due to the merging of the considered executions. For performance reasons, the \textit{paths} variable uses a tree representation to merge common prefixes of different paths. We implemented the different mutation operators described in Section 3.1, in order to perform classical mutation testing (enumerative approach) as well as FMM generation and execution, in VIBeS, our Variability Intensive Behavioural teSting Java framework [17].

3.3 FMMs as Higher-Order Mutants Model

Higher-order mutants can be valuable since some of them tend to be hard to kill [25]. However, the number of mutants grows exponentially with the order \( n \), which explodes the involved cost. This is obvious in Algorithm 1, for the enumerative approach, which generates all the \( (n-1) \)-order mutants in order to generate the \( n \)-order ones. Using the FMM approach, modelling higher-order mutation comes at (nearly) no cost. In an FMM \( (fs_{fmm}, fd_{fmm}) \), the set of allowed mutants (i.e., variations in \( fs_{fmm} \)) is represented by the feature diagram (\( fd_{fmm} \)). For instance, the constraints in the \( fd_{fmm} \) of Fig. 2 allow exactly one mutation at a time. This means that all valid mutants (products) of this FMM will have at most one variation from the original TS made by a mutation operator, e.g., Fig. 1(b) has (only) the \( \text{smi\_Go\_offline} \) feature active. The \( n \)-order mutants are represented by modifying the constraints on the \( fd_{fmm} \) so that they have exactly \( n \) mutations at a time. It means that generating the FMM using Algorithm 2 will also generate the FTS (which will be the same) for order 1 to \( n \) FMMs. For instance, the card payment terminal has the same FTS for all orders, as shown in Fig. 2, but differs in the FD, which is described by Fig.
4, by the group cardinality stating that exactly 2 subfeatures have to be selected. The FMM will compactly represent all the \( C_4^2 = 6 \) 2-order mutants.

**All-order mutants:** Using the same argument, we generalize to higher-order mutants. In this case, the FMM represents a single model with all possible \( n \) orders of mutants (with \( n \) between 1 and the number of possible mutants, which is the number of leaf features in the FMM’s FD). By setting the group cardinality of the FD in Fig. 4 to \([1..\ast]\), a valid product (mutant) of the FD will contain at least one application of a mutation operator, e.g., a product \( p = \{m, \text{smi}_\text{go_offline}\} \), but also \( p' = \{m, \text{smi}_\text{go_offline}, \text{smi}_\text{NO_GO}\} \), or \( p'' = \{m, \text{smi}_\text{go_offline}, \text{wis}_\text{Card}_\text{in}\} \), etc. In this case, the FMM compactly represents all the \( \sum_{k=1}^{4} C_4^k = 15 \) mutants of any order. The number of live mutants after the execution of a test case (tc) on a FMM \((f_{\text{fts}}, f_{\text{domms}})\) can be obtained by counting the number of SAT solutions (i.e., the number of possible assignments for each feature) to \( f_{\text{domms}} \land \text{liveFMM}(f_{\text{fts}}, tc) \), where \( f_{\text{domms}} \) is the FMM’s FD encoded as a boolean formula, including the disjunction of the mutation operators \((\text{Ops})\): \( \bigvee_{o \in \text{Ops}} o \). For a whole test set \( ts \), a mutant remains live only if it is live for every test case, giving:

\[ f_{\text{domms}} \land \bigwedge_{tc \in ts} \text{liveFMM}(f_{\text{fts}}, tc) \]

4. EVALUATION

We formulate our research questions as follows:

**RQ1** How does the FMM scheme compare with the “enumerative approach” in terms of execution time?

**RQ2** Is higher-order mutation under the FMM scheme tractable?
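The counts for the running example, 6 second-order mutants and 15 mutants of any order, are binomial counts over the mutation features. Assuming four applicable mutation features (as the counts suggest), they can be checked by brute-force model counting over feature assignments. This is a minimal illustrative sketch, not the paper's SAT4J-based computation, and the names are ours:

```java
// Minimal sketch: counting the valid mutants of a feature diagram by
// brute-force enumeration of feature assignments (a bitmask stands for one
// assignment of the mutation-operator features). The 4-feature setup is
// illustrative; the paper counts SAT solutions with a SAT solver instead.
import java.util.function.IntPredicate;

public class MutantCounter {

    // Count assignments of n boolean features satisfying the FD constraint.
    static int countModels(int n, IntPredicate constraint) {
        int count = 0;
        for (int mask = 0; mask < (1 << n); mask++) {
            if (constraint.test(mask)) count++;
        }
        return count;
    }

    public static void main(String[] args) {
        int n = 4; // e.g., four applicable mutation-operator features

        // FD constraint for 2-order mutants: exactly 2 features selected.
        int secondOrder = countModels(n, mask -> Integer.bitCount(mask) == 2);

        // FD constraint for all-order mutants ([1..*]): at least 1 selected.
        int allOrder = countModels(n, mask -> Integer.bitCount(mask) >= 1);

        System.out.println(secondOrder); // C(4,2) = 6
        System.out.println(allOrder);    // 2^4 - 1 = 15
    }
}
```

Enumeration is only viable for a handful of features; for the models of Section 4, with up to thousands of mutation features, a dedicated #SAT or SAT-enumeration procedure is required.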
4.1 Setup

We compare two test execution approaches: the enumerative approach, which is the classical mutation testing approach used by previous research [2], where each test case is executed against each mutant, and the FMM approach, where each test case is executed (only once) on the FMM.

**Models:** We consider models from different sources and of varying size. Table 2 details the employed models. For each model, we measure: the number of states (States); the number of transitions (Trans.); the number of actions (Act.); the average degree of the states, which corresponds to the average number of incoming or outgoing transitions per state (Avg. deg.); the maximal number of states between the initial state and another state when traversing the TS in breadth-first search (BFS height); and the number of transitions starting from a state and ending in another state with a lower level when traversing the TS in breadth-first search (Back lvl tr.). Our models are: the soda vending machine model (S. V. Mach.), a small example modelling the behaviour of a machine selling soda and tea [14]; the mine pump (Minepump), which models the behaviour of a pump that has to keep a mine safe from flooding by pumping water from a sink while avoiding methane explosions [14]; the Claroline website (Claroline), which represents the navigational usage of the online course management platform used at the University of Namur (http://webcampus.fundp.ac.be) and has been reverse-engineered from an Apache log using a 2-gram inference method [19,59]; and the WordPress models (AGE-RR, Elsa-RR, and Elsa-RRN), which represent the navigational usage of two different WordPress instances, also reverse-engineered using a 2-gram inference method. For the AGE-RR and Elsa-RR models, we considered only the request type (e.g., POST, GET, HEAD) and the requested resource (e.g., `*/index.php`) in the sequences used.
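As a rough illustration of 2-gram inference, each state corresponds to the last observed request and each transition to the next one. This is a minimal sketch, not the exact method of [19,59], and the log entries are made up:

```java
// Illustrative sketch of 2-gram (bigram) inference: build a transition
// system whose states are named after the last observed event and whose
// transitions are labelled by the next event in each logged trace.
// A simplification of the method of [19,59]; log entries are invented.
import java.util.*;

public class TwoGramInference {

    // state -> (action -> successor state)
    static Map<String, Map<String, String>> infer(List<List<String>> traces) {
        Map<String, Map<String, String>> ts = new LinkedHashMap<>();
        for (List<String> trace : traces) {
            String state = "s0"; // initial state, before any event
            for (String event : trace) {
                // target state is named after the event that led to it
                ts.computeIfAbsent(state, k -> new LinkedHashMap<>())
                  .put(event, event);
                state = event;
            }
        }
        return ts;
    }

    public static void main(String[] args) {
        // Hypothetical Apache-log style sequences (request type + resource).
        List<List<String>> traces = List.of(
            List.of("GET /index.php", "POST /login.php", "GET /course.php"),
            List.of("GET /index.php", "GET /course.php"));
        Map<String, Map<String, String>> ts = infer(traces);
        System.out.println(ts.get("s0").keySet());
        System.out.println(ts.get("GET /index.php").keySet());
    }
}
```

Because a state only remembers the previous event, two traces sharing a prefix merge into the same states, which is what makes the inferred TS compact.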
For the Elsa-RRN model, we considered the request type, the requested resource and the parameter names (e.g., `?page=`) in the sequences used as input of the 2-gram inference method [59]. The random model has been generated based on the following procedure: a) we generated a set of random graphs (basically directed arcs and nodes) and computed the different measures from Table 2 (except the number of actions) on them; b) we selected those graphs that are likely to represent a real system according to Pelánek [56], i.e., those having a small average degree, a large BFS height and a small number of back level edges (in this order); c) we applied a random labelling multiple times and computed the occurrence probability, i.e., the probability of the labels to be present; d) we selected the TSs in which the probability of the most occurring label was less than or equal to 20% [57]; e) we ended up with one random model, as recorded in Table 2.

**Test Cases:** For every model, we generate one set of tests using random walks on the TS and one set satisfying the all-actions criterion. The test sets were then executed with the enumerative and the FMM processes. Table 3 records the average size (and standard deviation) of the randomly generated test cases, the size of the generated all-actions coverage-driven test set, and the average size (and standard deviation) of its test cases. The size of the random test set is arbitrarily fixed to 100 test cases.

**Model Mutants:** We used the operators presented in Section 3.1.
Operators modifying states (WIS and SMI) were applied arbitrarily to 1/10 of the number of states in the model, and operators modifying transitions (TMI, AEX, TDE, TAD, and AMI) to 1/10 of the number of transitions (with 1 as bottom value). Since the operands are randomly chosen, we forbid multiple applications of any operator on the same operands to avoid duplicated mutants [52]. Table 4 presents the number of mutants generated per operator for the studied models.

**Mutant Execution:** To avoid execution time bias from the underlying machines, we execute each test case 3 times with each considered mutant (for the enumerative version) and on the FMM (for the family version). Experimentation was performed on an Ubuntu 14.04 LTS Linux (kernel 3.13) machine with an Intel Core i3 (3.10GHz) processor and 4GB of memory. The complete experiment took approximately 2 weeks.

4.2 Results and Discussion

Fig. 5 presents the distribution of the test execution time (in logarithmic scale on the y axis) for each studied model with a box plot. The first two boxes represent the total execution time taken by each test case when executed on the live mutants and on the killed mutants according to the enumerative approach. The third box presents the execution time of the FMM (FMM approach). Note that while the killed mutants do not require a complete execution in the enumerative approach, a complete execution is required for the FMM. This might provide an advantage to the enumerative approach. To assess this, we consider the killed and the live mutants separately. In all cases, we measure only the execution of the models, avoiding time bias due to I/O operations. As the execution time of a test case partially depends on its size, the high number of outliers in Fig. 5 is explained by the variation of the test case sizes. Tables 5 and 6 record different statistics over the execution time of the models in μ-seconds.
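The summary statistics reported in Tables 5 and 6 (minimum, maximum, median, mean, standard deviation) can be sketched as follows; the timing values below are made up and are not the paper's measurements:

```java
// Minimal sketch of the per-model summary statistics of Tables 5 and 6
// over a sample of per-test-case execution times (values invented).
import java.util.Arrays;

public class ExecStats {

    static double mean(double[] xs) {
        return Arrays.stream(xs).average().orElse(Double.NaN);
    }

    static double median(double[] xs) {
        double[] s = xs.clone();
        Arrays.sort(s);
        int n = s.length;
        return n % 2 == 1 ? s[n / 2] : (s[n / 2 - 1] + s[n / 2]) / 2.0;
    }

    static double stdDev(double[] xs) {
        double m = mean(xs);
        double ss = Arrays.stream(xs).map(x -> (x - m) * (x - m)).sum();
        return Math.sqrt(ss / xs.length); // population standard deviation
    }

    public static void main(String[] args) {
        double[] micros = {57, 442, 113, 120, 88}; // hypothetical times in µs
        System.out.println(median(micros)); // 113.0
        System.out.println(mean(micros));   // 164.0
        System.out.println(stdDev(micros));
    }
}
```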
For the enumerative approach, executing a test case on mutants that will remain live takes more time than executing the same test case on mutants that are killed. This was expected, since killed mutants do not require a complete execution of the test case. In both cases, the FMM execution runs faster, i.e., running a test case on all the mutants at once is very fast, despite the more complex exploration needed for the FMM’s FTS. Regarding **RQ1**, the box plots of Fig. 5 and the values of Tables 5 and 6 confirm that the execution time required by the FMM approach is considerably lower than the time required by the enumerative approach. The difference escalates to several orders of magnitude when considering live mutants. The difference between the family-based and enumerative approaches increases with the size of the model, indicating the improved scalability of our approach. To evaluate the statistical significance, we use a Wilcoxon rank-sum test for the different models we considered: we obtain a *p*-value of 1.343e-09 for the random model and *p*-values smaller than 2.2e-16 for the other models, confirming the hypothesis that FMM significantly outperforms the enumerative approach at the 0.001 significance level.

4.2.1 All-Order Mutation

Table 7 presents the number of all-order mutants for our models, the number of mutants live after executing the random and all-actions test sets (computed using SAT4J 2.3.5), and their mutation score. **Mem. Overflow** denotes an overflow during SAT solving; improving this step, for instance by reducing the boolean formula to process, is part of our future work. Columns 5 and 8 give the SAT-solving computation time (we set a timeout of 12 hours). Overall, our results suggest that higher-order mutation under the FMM scheme is tractable, answering **RQ2** positively.
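The Wilcoxon rank-sum comparison reported for RQ1 is based on a U statistic over the two timing samples. A brute-force sketch on made-up values follows; a real analysis, as in the paper, derives a p-value from U rather than using the raw statistic:

```java
// Sketch of the rank-sum comparison: a brute-force Mann-Whitney U statistic
// on two samples. The timing data below is invented, not the paper's
// measurements; real analyses convert U to a p-value (exact or via a
// normal approximation).
public class RankSum {

    // U statistic for sample a vs. sample b: number of pairs where a[i]
    // exceeds b[j], counting ties as one half.
    static double uStatistic(double[] a, double[] b) {
        double u = 0;
        for (double x : a)
            for (double y : b)
                if (x > y) u += 1;
                else if (x == y) u += 0.5;
        return u;
    }

    public static void main(String[] args) {
        double[] fmm = {12, 15, 11, 14};            // hypothetical FMM times
        double[] enumerative = {120, 98, 143, 110}; // hypothetical enumerative times
        System.out.println(uStatistic(fmm, enumerative)); // 0.0: FMM always faster
        System.out.println(uStatistic(enumerative, fmm)); // 16.0
    }
}
```

A U of zero in one direction (every FMM time below every enumerative time) is the extreme case that yields the smallest attainable p-value for the given sample sizes.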
In particular, all-order mutation achieves very good mutation scores (*MS ≥ 0.99*) when compared to first-order mutation, when this score can be computed. In our future work, we intend to: (i) improve the scalability of mutation score computation; and (ii) assess the practical relevance of higher-order mutation in test set comparison. Only one mutant is live for the soda vending machine and the mine pump models. This mutant is a first-order mutant resulting from the TAD operator. Indeed, the TAD operator adds a new transition, which cannot be detected by test cases solely generated from the original TS, since this transition does not exist in this model. All-order mutation enables to quickly kill mutants of any order and to focus on the interesting ones from a selective mutation perspective. For example, the 2916 remaining live mutants resulting from the execution of the all-actions test suite are relevant to study the mutation operators involved. Of course, they can also be

---

**Figure 5**: Execution Time: time required by a test case to execute on the live mutants, the killed mutants, and the FMM; time is measured in μsec.

Table 5: Mutant (1st order) execution time in $\mu$-seconds: minimal, maximal, median, mean and standard deviation for every test case on all live and killed mutants of the enumerative method and of the FMM. The mutation score (MS) of the all-actions and random test sets is provided for each model.

<table>
<thead>
<tr> <th></th> <th>Live m.</th> <th>Killed m.</th> </tr>
</thead>
<tbody>
<tr> <td colspan="3">S. V. Mach. model (all act. MS: 0.85; random MS: 0.85)</td> </tr>
<tr> <td>Min.</td> <td>57</td> <td>21</td> </tr>
<tr> <td>Max.</td> <td>442</td> <td>83</td> </tr>
<tr> <td>Median</td> <td>113</td> <td>38</td> </tr>
<tr> <td>Mean</td> <td>120</td> <td>39</td> </tr>
<tr> <td>S.Dev.</td> <td>43</td> <td>35</td> </tr>
<tr> <td colspan="3">Claroline (all act. MS: 0.07; random MS: 0.27)</td> </tr>
<tr> <td>Min.</td> <td>40,314</td> <td>236</td> </tr>
<tr> <td>Max.</td> <td>103,346</td> <td>19,282</td> </tr>
<tr> <td>Median</td> <td>53,951</td> <td>58</td> </tr>
<tr> <td>Mean</td> <td>57,000</td> <td>280</td> </tr>
<tr> <td>S.Dev.</td> <td>13,060</td> <td>710</td> </tr>
<tr> <td colspan="3">Elsa-RR model (all act. MS: 0.75; random MS: 0.49)</td> </tr>
<tr> <td>Min.</td> <td>20,743</td> <td>775</td> </tr>
<tr> <td>Max.</td> <td>59,237</td> <td>3,109</td> </tr>
<tr> <td>Median</td> <td>22,676</td> <td>191</td> </tr>
<tr> <td>Mean</td> <td>27,000</td> <td>230</td> </tr>
<tr> <td>S.Dev.</td> <td>8,500</td> <td>1,500</td> </tr>
<tr> <td colspan="3">Minepump model (all act. MS: 0.60; random MS: 0.82)</td> </tr>
<tr> <td>Min.</td> <td>441</td> <td>43</td> </tr>
<tr> <td>Max.</td> <td>623</td> <td>212</td> </tr>
<tr> <td>Median</td> <td>533</td> <td>108</td> </tr>
<tr> <td>Mean</td> <td>530</td> <td>100</td> </tr>
<tr> <td>S.Dev.</td> <td>35</td> <td>49</td> </tr>
</tbody>
</table>

Table 6: Mutant execution time in $\mu$-seconds

<table>
<thead>
<tr> <th></th> <th>Live m.</th> <th>Killed m.</th> </tr>
</thead>
<tbody>
<tr> <td colspan="3">Random Model (all act. MS: 0.16; random MS: 0.63)</td> </tr>
<tr> <td>Min.</td> <td>327,418</td> <td>24,675</td> </tr>
<tr> <td>Max.</td> <td>2,552,363</td> <td>60,354</td> </tr>
<tr> <td>Median</td> <td>1.3e+06</td> <td>1,500</td> </tr>
<tr> <td>Mean</td> <td>1.3e+06</td> <td>1,800</td> </tr>
<tr> <td>S.Dev.</td> <td>560,000</td> <td>3,500</td> </tr>
</tbody>
</table>

Table 7: All-order mutation score. For each test set and model, the table records the number of possible mutants (# mut.), the number of live mutants after the test set execution (#Lv.), the mutation score (MS) and the SAT computation time (T) in seconds.

<table> <thead> <tr> <th>Model</th> <th># mut.</th> <th>All act. MS</th> <th>#Lv.</th> <th>MS</th> <th>T</th> </tr> </thead> <tbody> <tr> <td>S. V.
Mach.</td> <td>127</td> <td>1</td> <td>0.99</td> <td>1.10</td> <td>1</td> </tr> <tr> <td>Minepump</td> <td>8,388,607</td> <td>1</td> <td>&gt;0.99</td> <td>1.84</td> <td>1</td> </tr> <tr> <td>Claroline</td> <td>5.49e+303</td> <td>1</td> <td>Timeout</td> <td>Timeout</td> <td></td> </tr> <tr> <td>AGE-RR</td> <td>4.71e+956</td> <td>1</td> <td>Timeout</td> <td>Timeout</td> <td></td> </tr> <tr> <td>Elsa-RR</td> <td>1.46e+194</td> <td>2916</td> <td>&gt;0.99</td> <td>37.78</td> <td>144</td> </tr> <tr> <td>Elsa-RRN</td> <td>7.61e+286</td> <td>36</td> <td>&gt;0.99</td> <td>150.32</td> <td>16</td> </tr> <tr> <td>Random</td> <td>2.62e+2577</td> <td>Mem. overflow</td> <td>Mem. overflow</td> <td></td> <td></td> </tr> </tbody> </table>

used to generate test cases killing them in order to augment the test suite. Exploring all-order MS in selective mutation or test case generation scenarios is part of our future work.

4.2.2 Threats to Validity

**Internal Validity:** Our experiments were performed on 7 models: 2 academic examples (the soda vending machine and the mine pump), 4 larger real-world models (Claroline, AGE-RR, Elsa-RR, and Elsa-RRN) and a randomly generated one. These models come from different sources and represent different kinds of systems: embedded systems designed by an engineer, and web-based applications where the model has been reverse-engineered from a running instance. The random model was built upon a set of generated TSs in order to match real system state-space measures, as described by Pelánek [56, 57].

**Construct Validity:** We chose to apply mutants to 1/10 of the states and/or transitions of the mutated model. This might result in more (or fewer) mutants than needed for the larger models. However, this is expected when using mutation. Additionally, since model-based mutation is applied to the system’s abstraction, abstract actions represent many concrete actions. It is therefore important to ensure a good coverage of most of the model actions.
TS and FTS executions are different and do not use the same algorithms. In order to decrease the bias in measuring execution time, both executions of the models have been done using VIBeS [17], the Java implementation of our Variability Intensive Behavioural testing framework. The two execution classes are different but use a variant of the same algorithm described in Section 3.2. Moreover, we used the Stopwatch Java class to measure the call to the execute method (i.e., model loading and result writing time have been omitted). Finally, we ran each test case 3 times on each mutant model (classical and FTS) to avoid bias due to process concurrency.

**External Validity:** We cannot guarantee that our results are generalizable to all behavioural models. However, we recall the diversity of the model sources (hand-crafted, reverse-engineered, and randomly generated to match real system state-spaces) as well as the diversity of the considered systems.

5. RELATED WORK

Program mutation was proposed as a rigorous testing technique [12]. The idea was then applied to test specification models [50] and, recently, to resolve software engineering problems such as the improvement of non-functional properties [42] and locating [53] and fixing software defects [43]. Here we briefly discuss works related to model-based mutation and testing, and code-based mutation.

5.1 Model-Based Mutation

The idea of model-based mutation was introduced by Gopal and Budd [13], who called it “Specification Mutation”. Specification mutation promises to identify defects related to missing functionality and misinterpreted specifications [13]. This is desirable since these kinds of defects cannot be identified by any code-based testing technique [28,62], including code-based mutation. Gopal and Budd [13] studied mutation for specifications expressed in logic. Similarly, Woodward [63] mutated and experimented with algebraic specifications.
Mutating models like finite state machines and Statecharts has also been done by Fabbri et al. [21]. Hierons and Merayo [27] used probabilistic finite state machines. All these studies suggested a set of operators and reported some exploratory results. Ammann et al. [4] suggested comparing the original and the mutated specification models using a model checker in order to generate counterexamples. These can then be used as test cases for the system under test. Black et al. [10] defined a set of operators based on empirical and theoretical analysis. They also defined a process of using them based on the SMV model checker. Contrary to our approach, none of these methods considers the mutation efficiency. Recent research focuses on mutating behavioural models. Aichernig et al. [2,3] defined UML state machine mutant operators and used them to insert faults in the models of an industrial system. These were used to design tests. The approach has a formal ground but considers neither optimising the test execution nor higher-order mutation. Belli and Beyazit [8,9] compare event-based and state-based model mutation testing. Both approaches were found to have similar fault detection capabilities. The authors also report that it seems more promising to perform higher-order mutation than first-order mutation, but did not provide evidence in support of this argument. Krems et al. [39] made available their MoMuT tool, but it is dedicated to test generation, not to mutant execution as our approach is. In their most recent work [38], they use an idea similar to FMM by triggering mutations during the exploration of the model, avoiding the execution of similar prefixes in different mutants. Additionally, MoMuT does not support higher-order mutation. Other applications of model-based mutation are to test model transformations and test configurations. Mottu et al. [47] defined a fault model relevant to the model transformation process, based on which they propose a set of mutant operators.
Henard et al. [26] define mutant operators for feature models. Along the same lines, Lackner and Schmidt [40] define mutant operators for the mappings of features to other model artifacts. Finally, Papadakis et al. [51] demonstrated that model-based mutation of combinatorial interaction testing models has a higher correlation with actual fault detection than the use of combinatorial interaction testing. Thereby, they provide grounds for the argument that model-based mutation might be more effective than other model-based testing methods.

5.2 Model-Based Testing

Offutt et al. [49] define test criteria for state-based specifications. They also describe techniques to automatically generate tests based on these criteria. Lackner et al. [41] suggested a test generation approach for product lines. Similar to our work, they combine feature diagrams with state machines to handle the product line variability. However, their approach does not perform mutation and is specific to software product lines. Briand et al. [11] proposed a technique for generating tests from statecharts. Their results were validated through code-based mutation.

5.3 Code-Based Mutation

In the context of code-based mutation, executable mutants are needed. This introduces a compilation overhead that is proportional to the number of mutants. To reduce this cost, Untch et al. [60] proposed mutant schemata, an approach that replaces the program operators with schematic functions. These functions introduce the mutants at runtime and thus only one compilation is needed. Ma et al. [45] suggested using bytecode translation, a technique that introduces the mutants directly at the bytecode level and thus avoids multiple compilations. To reduce the test execution overhead, several optimizations have been proposed. Delamaro and Maldonado [16] suggested recording the execution trace of the original program and considering only the mutants that are reachable by each of the employed tests.
Along the same lines, Mateo and Polo [46] suggested stopping mutant executions when they cause infinite loops. Jackson and Woodward [29] suggested parallelizing the mutant execution process. Kapoor and Bowen [35] proposed ordering the mutants in such a way that the test execution is minimized. Papadakis and Malevris [55] used mutant schemata to identify mutants that are reached and infected by the considered tests. They then reduce test execution by considering only the mutants that cause infection. This technique was later evaluated by Just et al. [32], who found that it reduces test execution by 40%.

6. CONCLUSION

This paper presents a family-based approach to model-based mutation testing, named the Featured Mutant Model. It allows generating mutants of any order and assessing test effectiveness via an optimised execution scheme. Testing behavioural models with FMMs is a completely automated process that involves no extra manual or computational effort over previous approaches. In short, the use of FMMs has the following benefits: (i) it makes it easy to reason about and generate behavioural mutants, (ii) it can significantly speed up the evaluation of test suites against mutants (up to 1,000 times) and (iii) it can efficiently perform higher-order mutation. But, obviously, this is not the end of the story. In our future work, we will further investigate scalability issues regarding all-order mutation analysis in order to be able to compute the mutation score for the largest models. This implies optimising the boolean formulas or using approximate computation heuristics. Finally, since “mutants are a valid substitute for real faults” [33], we envision developing test case generation techniques based on mutation coverage of the FMM.

Acknowledgements

The authors would like to thank Maxime Cordy, Bernhard Aichernig, and the reviewers for their helpful comments, and the AGE (University of Namur) and the Elsa organisation for providing logs of WordPress sites.
This research was partly funded by the Walloon region under the INOGRAMS project (n°7171). Mike Papadakis is supported by the National Research Fund, Luxembourg, INTER/MOBILITY/14/7562175.

7. REFERENCES
The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies

Marquardt, Nicolai; Diaz-Marino, Robert; Boring, Sebastian; Greenberg, Saul
Publication date: 2011
Document version: Peer reviewed version

The Proximity Toolkit: Prototyping Proxemic Interactions in Ubiquitous Computing Ecologies

Nicolai Marquardt\textsuperscript{1}, Robert Diaz-Marino\textsuperscript{2}, Sebastian Boring\textsuperscript{1}, Saul Greenberg\textsuperscript{1}

\textsuperscript{1} Department of Computer Science, University of Calgary, 2500 University Drive NW, Calgary, AB, T2N 1N4, Canada
[nicolai.marquardt, sebastian.boring, saul.greenberg]@ucalgary.ca

\textsuperscript{2} SMART Technologies, 3636 Research Road NW, Calgary, AB, T2L 1Y1, Canada
robdiaz-marino@smarttech.com

Figure 1. Left: three entities – person, tablet and vertical surface; Center: proxemic relationships between entities, e.g., orientation, distance, pointing rays; Right: visualizing these relationships in the Proximity Toolkit’s visual monitoring tool.

ABSTRACT

People naturally understand and use proxemic relationships (e.g., their distance and orientation towards others) in everyday situations. However, only a few ubiquitous computing (ubicomp) systems interpret such proxemic relationships to mediate interaction (proxemic interaction). A technical problem is that developers find it challenging and tedious to access proxemic information from sensors. Our Proximity Toolkit solves this problem. It simplifies the exploration of interaction techniques by supplying fine-grained proxemic information between people, portable devices, large interactive surfaces, and other non-digital objects in a room-sized environment. The toolkit offers three key features. 1) It facilitates rapid prototyping of proxemic-aware systems by supplying developers with the orientation, distance, motion, identity, and location information between entities.
2) It includes various tools, such as a visual monitoring tool, that allow developers to visually observe, record and explore proxemic relationships in 3D space. 3) Its flexible architecture separates sensing hardware from the proxemic data model derived from these sensors, which means that a variety of sensing technologies can be substituted or combined to derive proxemic information. We illustrate the versatility of the toolkit with proxemic-aware systems built by students.

ACM Classification: H5.2 [Information interfaces]: User Interfaces – input devices and strategies, prototyping.

General terms: Design, Human Factors

Keywords: Proximity, proxemics, proxemic interactions, toolkit, development, ubiquitous computing, prototyping.

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. To copy otherwise, to republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. UIST ’11, October 16–19, 2011, Santa Barbara, CA, USA. Copyright © 2011 ACM 978-1-4503-0716-1/11/10... $10.00.

INTRODUCTION

Ubicomp ecologies are now common, where people’s access to digital information increasingly involves near-simultaneous interaction with multiple nearby digital devices of varying size, e.g., personal mobile phones, tablet and desktop computers, information appliances, and large interactive surfaces (Figure 1). This is why a major theme in ubiquitous computing is to explore novel forms of interaction, not just between a person and a device, but also between a person and their set of devices [32]. Proxemic interaction is one strategy to mediate people’s interaction in room-sized ubicomp ecologies [2,9].
It is inspired by Hall’s proxemic theory [11] about people’s understanding and use of interpersonal distances to mediate their interactions with others. In proxemic interaction, the belief is that we can design systems that will let people exploit a similar understanding of their proxemic relations with their nearby digital devices, thus facilitating more seamless and natural interactions. A handful of researchers have already explored proxemic-aware interactive systems. These range from spatially aware mobile devices [17], office whiteboards [15], public art installations [28], and home media players [2], to large public ambient displays [31]. All developed novel interaction techniques as a function of proxemic relationships between people and devices. Building proxemic-aware systems, however, is difficult. Even if sensing hardware is available, translating low-level sensing information into proxemic information is hard (e.g., calibration, managing noise, calculations such as 3D math). This introduces a high threshold for those wishing to develop proxemic interaction systems. As a result, most do not bother. The few that do spend most of their time on low-level implementation details to actually access and process proxemic information, rather than on refining the interaction concepts and techniques of interest. To alleviate this problem, we built the Proximity Toolkit. Our goal was to facilitate rapid exploration of proxemic interaction techniques. To meet this goal, the Proximity Toolkit transforms raw tracking data gathered from various hardware sensors (e.g., infra-red motion capturing systems, depth sensing cameras) into rich high-level proxemic information accessible via an event-driven object-oriented API. The toolkit includes a visual monitoring tool that displays the physical environment as a live 3D scene and shows the proxemic relationships between entities within that scene.
It also provides other tools: one to record events generated by entities for later playback during testing; another to quickly calibrate hardware and software. Thus our work offers three contributions:

1. The design of a toolkit architecture, which fundamentally simplifies access to proxemic information.
2. Interpretation and representations of higher-level proxemic concepts (e.g., relationships, fixed/semi-fixed features) from low-level information.
3. The design of complementary visual tools that allow developers to explore proxemic relationships between entities in space without coding.

The remainder of the paper is structured as follows: we recap the concepts of proxemic interaction and derive challenges for developers. We then introduce the design of our toolkit; we include a running example, which we use to illustrate all steps involved in prototyping a proxemic interaction system. Subsequently, we introduce our visual monitor and other tools, and explain the toolkit’s API. Next, we discuss the flexible toolkit architecture and implementation. This is followed by an overview of applications built by others using our toolkit. Finally, we discuss related toolkit work in HCI.

BACKGROUND: PROXEMIC INTERACTION

Proxemics – as introduced by anthropologist Edward Hall in 1966 [11] – is a theory about people’s understanding and use of interpersonal distances to mediate their interactions with other people. Hall’s theory correlates people’s physical distance to social distance. He noticed zones that suggest certain types of interaction: from intimate (6-18 inches), to private (1.5-4 feet), social (4-12 feet), and public (12-25 feet). The theory further describes how the spatial layout of rooms and immovable objects (fixed features) and movable objects such as chairs (semi-fixed features) influence people’s perception and use of personal space when they interact [11].
Research in the field of proxemic interaction [2,9,31] introduces concepts of how to apply this theory to ubicomp interaction within a small area such as a room. In particular, such ubicomp ecologies mediate interaction by exploiting fine-grained proxemic relationships between people, objects, and digital devices. The design intent is to leverage people’s natural understanding of their proxemic relationships to manage the entities that surround them. Proxemic theories suggest that a variety of physical, social, and cultural factors influence and regulate interpersonal interaction. Not all can be (or needs to be) directly applied to a proxemic ubicomp ecology. Thus the question is: what information is critical for ubicomp proxemics? Greenberg et al. [9] identified and operationalized five essential dimensions as a first-order approximation of key proxemic measures that should be considered in ubicomp.

1. Orientation: the relative angles between entities; such as if two people are facing towards one another.
2. Distance: the distance between people, objects, and digital devices; such as the distance between a person and a large interactive wall display.
3. Motion: changes of distance and orientation over time; such as a person approaching a large digital surface to interact with it directly.
4. Identity: knowledge about the identity of a person, or a particular device.
5. Location: the setup of environmental features; such as the fixed-feature location of walls and doors, and the semi-fixed features including movable furniture.

Previous researchers have used a subset of these five dimensions to build proxemic-aware interfaces that react more naturally and seamlessly to people’s expectations of proxemics. Hello Wall [29] introduced the notion of ‘distance-dependent semantics’, where the distance of a person to the display defined the possible interactions and the information shown on the display.
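The idea of distance-dependent semantics reduces to a simple mapping from a measured distance to an interaction zone. The following Python sketch is purely illustrative (it is not toolkit code); the zone boundaries are the Hall figures quoted above.

```python
# Illustrative sketch: classify an interpersonal distance (in feet) into
# Hall-style zones, as used by "distance-dependent semantics" displays.
def hall_zone(distance_ft):
    if distance_ft < 1.5:     # intimate: up to ~18 inches
        return "intimate"
    elif distance_ft < 4.0:   # private: 1.5 - 4 feet
        return "private"
    elif distance_ft < 12.0:  # social: 4 - 12 feet
        return "social"
    else:                     # public: 12 feet and beyond
        return "public"
```

A display could then switch its content template on the returned zone rather than on raw distance values.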
Similarly, Vogel’s public ambient display [31] relates people’s presence in four discrete zones around the display to how they can interact with the digital content. Snibbe [28] investigated people’s use of proxemics in the Boundary Functions public interactive art installation, where they also noticed cultural differences in people’s implicit use of proxemics (similar to Hall’s observations). Ju [15] explored transitions between implicit and explicit interaction with a proxemic-aware office whiteboard: interaction from afar is public and implicit, but becomes more explicit and private when closer. Ballendat et al. [2] developed a variety of proxemic-aware interaction techniques, illustrated through the example of a home media player application. Their system exploits almost all of the five dimensions: it activates when the first person enters, reveals more content when approaching and looking at the screen, switches to full screen view when a person sits down, and pauses the video when the person is distracted (e.g., receiving a phone call). If a second person enters, the information display is altered to account for two viewers in the room [2].

This previous research in proxemic interaction opened up a promising direction of how to mediate people’s interaction with ubicomp technology based on proxemic relationships. The caveat is that they are really just starting points of how we can integrate proxemic measures into interaction design. Further explorative research – including the development and evaluation of actual proxemic-aware systems – will help to refine our understanding of how proxemic theories apply to ubicomp.

Building proxemic-aware systems such as the ones described previously is difficult and tedious. This is mostly due to the serious technical challenges that developers face when integrating proxemic information into their application designs. Several challenges are listed below. 1.
**Exploring and observing proxemic measures between entities in the ecology.** Developers need to do this to decide which measures are important in their scenario. 2. **Accessing proxemic measurements from within software that is developed to control the ubicomp system.** Developers currently do this through very low-level programming against a particular tracking technology, requiring complex 3D transformations and calculations, and often resulting in brittleness. 3. **Support for proxemic concepts** is created from scratch by developers, e.g., when considering distance of spatial zones or the properties of fixed and semi-fixed features (e.g., the spatial arrangement) in applications. 4. **Debugging and testing** of such systems is difficult due to a lack of sensing and/or matching monitoring tools. ### THE PROXIMITY TOOLKIT The Proximity Toolkit directly addresses these challenges. It facilitates programmers’ access to proxemic information between people, objects and devices in a small ubicomp environment, such as the room shown in Figure 3 and visualized in Figure 2. It contains four main components. - **Proximity Toolkit server** is the central component in the distributed client-server architecture, allowing multiple client devices to access the captured proxemic information. - **Tracking plug-in modules** connect different tracking / sensing systems with the toolkit and stream raw input data of tracked entities to the server. - **Visual monitoring tool** visualizes tracked entities and their proxemic relationships. - **Application programming interface (API)** is an event-driven programming library used to easily access all the available proxemic information from within developed ubicomp applications. We explain each of these components in more detail below, including how each lowers the threshold for rapidly prototyping proxemic-aware systems. Also see the video figure. 
However, we first introduce a scenario of a developer creating a proxemic interaction system (also in video figure). Through this scenario, we will illustrate how the Proximity Toolkit is used in a real programming task to create a prototype of a proxemic-aware ubicomp application. The example is deliberately trivial, as we see it akin to a **Hello World** example illustrating basic programming of proxemic interaction. Still, it shares many similarities with more comprehensive systems built for explorations in earlier research, e.g., [2,15,31].

**Scenario.** Developer Steve is prototyping an interactive announcement board for the lounge of his company. In particular, Steve envisions a system where employees passing by the display are: attracted to important announcements as large visuals from afar; see and read more content as they move closer; and post their own announcements (typed into their mobile phones) by touching the phone against the screen. To create a seamless experience for interacting with the large ambient display, Steve plans to recognize nearby people and their mobile devices. Steve builds his prototype to match the room shown in Figure 3.

**Proximity Toolkit Server**

The Proximity Toolkit Server is the central component managing proxemic information. It maintains a hierarchical data model of all fixed features (e.g., walls, entranceways), semi-fixed features (e.g., furniture, large displays), and mobile entities (e.g., people or portable devices). This model contains basic information including identification, position in 3D coordinates, and orientation. The server and toolkit API then perform all necessary 3D calculations on this data required for modeling information about higher-level proxemic relationships between entities. The server is designed to obtain raw data from various attached tracking systems. For flexibility, each of the tracking systems is connected through a separate plugin module loaded during the server’s start-up.
These plugins access the captured raw input data and transfer it to the server’s data model. The current version of our toolkit contains two plugins: the marker-based VICON motion capturing system, which allows for sub-millimeter tracking accuracy [www.vicon.com], and the KINECT sensor, which allows tracking of skeletal bodies [www.kinect.com]. In a later section we discuss the implementation, integration, and combination of these tracking technologies, and how to set up the server to match the environment. Importantly, the server’s unified data model is the basis for a distributed Model-View-Controller architecture [3], which in turn is used by the toolkit client API and the monitoring tool, and to calculate proxemic relationships between entities.

**Table 1.** Accessible proxemic information in the Proximity Toolkit: individual entities, relationships between two entities, and pointing relationships. This information is accessible through the toolkit API and the toolkit monitor visualization.

**Scenario.** Developer Steve begins by starting the server. The server automatically loads all present tracking plugins. Based on the information gathered from these plugins, it populates and updates the unified data model in real-time. By default, our toolkit already includes a large pre-configured set of tracked entities with attached markers (such as hats, gloves, portable devices) and definitions of fixed and semi-fixed features (large interactive surface, surrounding furniture). To add a new tracked object, Steve attaches markers to it and registers the marker configuration as a new tracked entity. This process takes minutes.

**Visual Monitoring Tool: Tracked Entities**

The visual monitoring tool helps developers to see and understand what entities are being tracked and how the data model represents their individual properties. Figure 2 is a screenshot of this tool: the visualized entities in (b-f) correspond to real-world entities captured in Figure 3 (b'-f').
Specifically, the visual monitoring tool connects to the server (through TCP) and presents a 3D visualization of the data model (Figure 2 centre). This view is updated in real-time and always shows:

- the approximate volume of the tracked space as a rectangular outline box (Fig. 2a)
- position and orientation of people (Fig. 2bc)
- portable digital devices, such as a tablet pc (Fig. 2d)
- digital surfaces, such as the large wall display (Fig. 2e)
- fixed and semi-fixed features, such as a table, couch (Fig. 2f), and entranceway (Fig. 2g).

The left side of the monitoring window shows a list of the activated input tracking plugins (Figure 2h) and another list with an overview of all currently tracked entities (Figure 2i). Clicking on any of the items in this list opens a hierarchical list of properties showing the item’s current status (e.g., its location, or orientation). When Steve selects any of these properties, the monitoring window shows the corresponding value (e.g., the current position as a 3D Vector, or the velocity; Fig. 2k). Part A of Table 1 shows an overview of the most important available properties.

**Scenario.** Before Steve starts to program, he explores all available proxemic information through the visual monitoring tool. He inspects the currently tracked entities (Figure 2 left, also displayed in the center), as well as which entity properties are available for him to use. Steve finds this visual overview particularly important to his initial design, as he is still investigating the possible mappings of proxemic relationships to system behaviour. In later stages, he will also use this monitoring tool to test and debug his program.

**Visual Monitoring Tool: Relationships**

Another major feature of the visual monitoring tool is to let people set and observe particular proxemic relationships between entities, where developers will use these relationships to define particular proxemic interaction behaviours. Specifically, the Relation Visualizer panel (Fig.
2, l-m) allows a developer to select a type of relationship between entities, and then to observe the values of all related properties. The complete list of proxemic relationships that are available to observe is summarized in part B/C of Table 1.

**Scenario.** Steve wants to observe a relationship between Person1 (representing the first person entering the space) and the Smartboard display. Steve drags the two entries from the list of tracked entities (Figure 2i) to the top of the Relation Visualizer panel (Fig. 2l). Next, Steve selects one of the following relationship categories from a drop-down menu.

- Orientation (e.g., angles between entities)
- Location (e.g., changes in distance between the person and the smartboard)
- Direction (e.g., if the front of the person’s body faces towards the screen)
- Movement (e.g., acceleration or velocity)
- Pointing (e.g., the display intersection point of the right arm pointer of the person)
- Collision (e.g., if the volumes of two tracked entities are so close that they collide)

Steve can now observe how those entities relate to each other. The panel in Fig. 2m shows the numeric values of any properties belonging to this category. The categories plus the properties within them operationalize the five essential elements of proximity mentioned previously. With his public announcement application in mind, Steve is interested in knowing when a person is close to the display. He selects the Location category, and looks at the values of the Distance property, which – in this case – measures the distance of the person’s body to the board (Fig. 2m). Next, he wants to know when the person is facing towards the screen. He selects the Direction category from the menu, and immediately sees the related proxemic properties with their current values and their graphical appearance in the visualization. He is particularly interested in the ATowardsB property, which is true if the person [A] is facing towards the smartboard [B].
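An ATowardsB-style test is essentially an angle comparison: entity A faces entity B when the angle between A's forward vector and the direction from A to B is small. The Python sketch below illustrates the underlying geometry only; the positions, vectors, and the 30° threshold are hypothetical, not values taken from the toolkit.

```python
import math

# Illustrative sketch of a facing test between two entities.
# pos_a / pos_b: 3D positions; forward_a: A's forward direction vector.
def a_towards_b(pos_a, forward_a, pos_b, max_angle_deg=30.0):
    to_b = tuple(b - a for a, b in zip(pos_a, pos_b))
    norm_to_b = math.sqrt(sum(c * c for c in to_b))
    norm_fwd = math.sqrt(sum(c * c for c in forward_a))
    if norm_to_b == 0 or norm_fwd == 0:
        return False  # degenerate: entities coincide or no orientation
    cos_angle = sum(f * t for f, t in zip(forward_a, to_b)) / (norm_fwd * norm_to_b)
    angle = math.degrees(math.acos(max(-1.0, min(1.0, cos_angle))))
    return angle <= max_angle_deg
```

The toolkit hides this math behind a boolean property; the sketch just shows why a direction relationship needs both positions and an orientation.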
He decides to use the information about direction and distance to adapt the content shown on the announcement board. Steve continues exploring other proxemic relationship categories and makes note of the types of relationships that he will integrate into his application. As he selects these other categories (Fig. 2l), the 3D visual representation changes accordingly. Figure 4 illustrates three other visualizations of proxemic relationships that Steve explored: the distance between the person and the display (Fig. 4a), the forward pointer of the left arm and its intersection point with the smartboard (Fig. 4b), and the collision volumes (Fig. 4c). SIMPLIFIED API ACCESS TO PROXEMIC INFORMATION We now take a closer look at the development API, offered via an object-oriented C# .NET development library. We designed it to be fairly easy to learn and use (1) by taking care of and hiding low-level infrastructure details and (2) by using a conventional object-oriented and event-driven programming pattern. Essentially, the API lets a developer programmatically access the proxemic data previously observed in the monitoring tool. We explain how this works by continuing our scenario. Scenario. Steve adds the Proximity Toolkit API library to his own PC-based software project. The only criterion is that his PC needs network access to the proximity server. Steve begins by initializing his software. To set up his software to use the server, he adds three lines of code (lines 1-3 in Figure 5). First, he creates a new client connection object, which provides a high-level framework for monitoring the interaction of tracked presences, such as people and objects. The ProximitySpace object maintains a list of all available tracked entities, and is used to create instances of entities or for initializing event handlers to monitor relationships. 
Next, Steve initializes three of the entities he is interested in (lines 4-6): the person representing the first person entering the space, the smartboard, and a tablet (PresenceBase is a special object that represents individual tracked or static objects). The following describes how Steve then monitors the relationships between these entities. We go through each of the five proxemic dimensions introduced earlier (albeit in a slightly different order), explaining how Steve writes his application to monitor changes in each of these dimensions, and how he uses that information to mediate interaction with his interactive announcement board.

1. Orientation

Monitoring orientation changes allows (1) accessing the exact angle of orientation between two entities and/or (2) determining whether two entities are facing each other. Steve is mostly interested in the relationship between a person and the smartboard display. He adds line 7, which creates a relationship between these two as indicated by their parameters. The system is now tracking both entities relative to each other. Steve is also interested in knowing when the orientation and location between these two change. For orientation, he initializes an event handler to receive updates of the Direction relationship between the person and the smartboard (line 8). The `OnDirectionUpdated` method is invoked when the system recognizes any changes in orientation between the person and the smartboard (line 10). While Steve could access each entity’s precise orientation values (e.g., angles of orientation), he is only really interested in knowing whether the person is facing towards the smartboard. Consequently, he writes the event handler callback method (lines 10-12) to access the `ATowardsB` property in the event arguments: it is true if the person is facing the smartboard (line 11). Entries R2-R5 and P1-P3 in Table 1 give an overview of further orientation relationships that can be monitored.
As well, the programmer can access the absolute orientation of an individual entity at any time (see entries I6 – I7 in Table 1). For example, the following property returns the current yaw angle of the tablet: `tablet.Orientation.Yaw;`

2. Distance, including Location, Pointing and Touching

Similarly, Steve can monitor changes of distance between entities. We illustrate how Steve can receive updates about distance changes by adding another event callback for `OnLocationUpdated` events (line 9). This callback method (lines 13-15) is invoked whenever the location of at least one of the two entities changes. In line 14 Steve accesses the current distance between the person and the smartboard, and uses this distance value to make the visual content on the announcement board vary as a function of the distance between the person and the display. The closer the person, the more content is revealed. Other available properties relate to distance. First, the actual location property of each entity, i.e., its position within the space, is accessible at any time. For example, Steve can access the current coordinates of the person by accessing `this.person.Location`. Second, pointing relationships monitor orientation and distance simultaneously. Pointing is similar to ray-casting. Each entity can have one or multiple pointers. Each pointer has a pointing direction, and the callback returns the intersection of that direction with the other entity. It also returns the length of the pointing ray between entities, which may not be exactly the same as distance. To illustrate, Steve tracks not only the close distance of a tablet computer to the smartboard, but where that tablet raycasts onto the smartboard. He initializes a second `RelationPair` between the tablet and the smartboard (line 16). He subscribes to `OnPointingUpdated` events that are triggered whenever any of the pointers of the tablet changes relative to the board (line 17).
In the event callback method (lines 18 to 22) Steve first checks if the tablet’s forward pointer faces the display (`PointsTowards`) and if the ray length between tablet and board is smaller than 50 cm (line 19). If this is the case, he shows an icon on the ray’s intersection point (line 20) on the smartboard to let the person know they can touch the surface to initiate a transfer. Third, Steve checks if the tablet is touching the surface (`IsTouching`, line 21), i.e., at a distance of ~0. If so, he initiates transfer of the content on the tablet to the large display. By using the intersection point of the tablet with the screen Steve can show the transferred content at the exact position where the tablet touches the board.

```csharp
01 ProximityClientConnection client = new ProximityClientConnection();
02 client.Start("192.168.0.11", 888);
03 ProximitySpace space = client.GetSpace();
04 PresenceBase person = space.GetPresence("Person1");
05 PresenceBase smartboard = space.GetDisplay("SmartBoard");
06 PresenceBase tablet = space.GetDisplay("Tablet");
07 RelationPair relation = space.GetRelationPair(person, smartboard);
08 relation.OnDirectionUpdated += new DirectionRelationHandler(OnDirectionUpdated);
09 relation.OnLocationUpdated += new LocationRelationHandler(OnLocationUpdated);
10 void OnDirectionUpdated(ProximitySpace space, DirectionEventArgs args) {
11   if (args.ATowardsB) { /* person is facing the display: show content */ } else { /* hide */ }
12 }
13 void OnLocationUpdated(ProximitySpace space, LocationEventArgs args) {
14   double distance = args.Distance; /* change visual content as a function of distance */
15 }
16 RelationPair relationTablet = space.GetRelationPair(tablet, smartboard);
17 relationTablet.OnPointingUpdated += new PointingRelationHandler(OnPointingUpdated);
18 void OnPointingUpdated(ProximitySpace space, PointingEventArgs args) {
19   if (args["forward"].PointsTowards && args["forward"].Distance < 500.0) {
20     Point intersection = args["forward"].DisplayPoint;
21     if (args["forward"].IsTouching) {
22       /* transfer content from the tablet to the large display */
23     }
24   }
25 }
```

Figure 5. Partial source code for the proxemic-aware announcement board application.

3. Identity

The toolkit allows access to the identity information of all tracked entities. The Name property provides the identifier string of each entity, and IsVisible is true if the entity is currently tracked by the system. A developer can subscribe to events notifying about any new tracked entities that enter the ubicomp space through the space.OnPresenceFound event. In the associated event callback method, the event arguments give information about the type and name of the detected entity. For example, Steve could have his system track and greet a previously unseen person with a splash screen on first appearance, and dynamically initialize any necessary event callbacks relating that person to other entities in a scene.

4. Motion

Motion events describe changes of distance and orientation over time, e.g., updates of changes in acceleration and velocity of any entity. For example, Steve can have his application ignore people moving quickly by the display, as he thinks they may be annoyed by any attempts to attract their attention. To receive such velocity updates, Steve would add an event handler (similar to lines 8 and 9) through OnMotionUpdated and then simply access the value of the args.Velocity property. Based on that value, he would activate the display only if the velocity was less than a certain threshold.
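The velocity filter just described amounts to estimating speed from successive position samples and comparing it to a cut-off. A minimal Python sketch (illustrative only; the 1.0 m/s threshold and metre units are assumptions, not toolkit values):

```python
import math

# Illustrative sketch: estimate speed from two successive position samples
# (in metres, dt seconds apart) and gate display activation on it.
def speed(p0, p1, dt):
    return math.dist(p0, p1) / dt  # metres per second

def should_activate(p0, p1, dt, max_speed=1.0):
    # Ignore people rushing by; only engage slow-moving passers-by.
    return speed(p0, p1, dt) < max_speed
```

In the toolkit this comparison would live inside the `OnMotionUpdated` callback, with the velocity already supplied by the event arguments.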
Of course, Steve could have determined a reasonable threshold value by observing the velocity value of a person rushing by the display in the visual monitoring tool.

5. Location: Setup of Environment

Using location, the toolkit lets one track the relationships of people and devices to the semi-fixed and fixed features in the physical environment. For example, the model may contain the fixed-feature position of the entrance-way to a room, allowing one to know if someone has crossed that threshold and entered the room. It may also contain the location of semi-fixed features, such as the chairs and table seen in Figure 3. Monitoring event handlers for fixed and semi-fixed features can be initialized similarly to the ones we defined earlier. Steve sets up several fixed feature entities – the smartboard and the entrance-way – through a few initial configuration steps. This only has to be done once. Using a physical pointer (the stick in Figure 6a), he defines each entity’s volume by physically outlining it in space. Under the covers, the toolkit tracks the 3D tip location of this stick and builds a 3D model of that entity. Each location point of the model is confirmed by pressing a button (e.g., of a wirelessly connected mouse). Figure 6 illustrates how Steve defines the smartboard. After placing the pointer in the four corners of the display plane (Fig. 6a), the coordinates appear in the visualization (6b), and a control panel allows fine adjustments. He saves this to the Proximity Toolkit server as a model. Similarly, Steve defines the entrance-way by outlining the door (Fig. 2g), and the couch by outlining its shape (Fig. 2f). Steve can now monitor proxemic relationships between all moving entities and these newly defined features.
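Geometrically, an outlined entrance-way reduces to a plane, and "passing through" can be detected when the signed distance of a tracked position changes sign between updates. The Python sketch below is a hypothetical reduction of this idea, not the toolkit's internals:

```python
# Illustrative sketch: build a plane from outlined corner points and detect
# when a tracked point crosses it between two successive updates.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def plane_from_corners(c0, c1, c2):
    # Two edges of the outlined quad span the plane; their cross product is
    # its normal. (A fourth outlined corner would refine, not change, this.)
    e1 = tuple(b - a for a, b in zip(c0, c1))
    e2 = tuple(b - a for a, b in zip(c0, c2))
    return c0, cross(e1, e2)

def signed_distance(plane, p):
    origin, n = plane
    return sum(ni * (pi - oi) for ni, pi, oi in zip(n, p, origin))

def passed_through(plane, prev_pos, cur_pos):
    # A sign change means the path crossed the entrance-way plane.
    return signed_distance(plane, prev_pos) * signed_distance(plane, cur_pos) < 0
```

A production version would additionally clip the crossing point to the doorway's outline; the sketch only shows the plane-crossing core.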
For example, he can create an event handler to receive notifications when a person passes through the entrance-way (by using the OnCollisionUpdated event) and when a person sits on the couch (using the Distance property of the OnLocationUpdated event). Semi-fixed features differ. While they are part of the environment, they are also movable. As with fixed features, a developer would model a shape by outlining it with the stick. Unlike fixed features, he would also add markers to that entity. The toolkit tracks those markers, and repositions the entity accordingly. For example, Steve could have modeled a chair, tracked where it is in the room, and adjusted the presentation if a person was sitting on it. We should also mention that we believe location should also include further contextual information about this particular environment, e.g., the meaning of that place. Such contextual information is not yet included in the toolkit, but could be easily added as metadata.

**Scenario – next steps.** Our walkthrough example illustrated the easy-to-use mechanisms of integrating proxemic measurements into a ubicomp system. While simple, this starting point allows Steve to further extend the system functionality by exploring proxemic interactions. Examples include: (1) subscribing to events of a second person to let the system react to both persons’ movement to the display. (2) Monitoring additional tablet computers, and enabling content-sharing between them as a function of the devices’ distance. Overall, the toolkit minimizes the effort necessary for such extensions, and allows rapid exploration and alteration of interaction techniques.

**Additional Tools Facilitating Prototyping Process**

The toolkit is more than an API, as it offers additional tools to lower the threshold for developing proxemic-aware systems. The already-discussed visual monitoring tool is one of these. Several others are described below.

**Recording and playback of proxemic sequences.**
To test applications, developers would need actors to perform the proxemic movements between entities every time. This is problematic for many reasons: it is tedious; the sensing equipment may not be available; and it is difficult to repeat particular test sequences. To alleviate this, the toolkit provides a recording tool.

On start-up, the VICON plugin initializes the underlying NEXUS software [30], and the KINECT plugin initializes the OPENNI [24] software. Once initialized, plugins receive raw data of tracked people, objects, and/or devices in 3D space. The `onUpdate` method of each plugin module is responsible for streaming raw tracking data into the toolkit.

**Diverse tracking capabilities.** In order to allow the integration of hardware with different tracking capabilities, the plugins specify the kinds of proxemic information they support. For example, a tracking system might gather information about the position of an entity, but *not* its orientation. Following the *decorator pattern* [7], each plugin can specify exactly what kind of input data a particular tracking hardware provides. The decorator pattern describes a mechanism to extend the functionality of objects at run-time. In our case, the plugin creates decorator objects for each proxemic dimension of input data it supports and calls the update method on these decorators. For example, the `LocationDecorator` updates the location of an entity and the `OrientationDecorator` its orientation (plugins can add custom decorators for any proxemic information not yet supported by available decorators). During each update cycle (i.e., when `onUpdate` is called), the decorator objects update the proxemic information of each entity in the server’s unified data model. No high-level calculations on raw input data are required for the plugin implementation, as these are performed by the proximity server or API.
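The plugin/decorator update cycle and the hierarchical key-value model described in this section might look like the following Python sketch. Class names and key paths approximate the concepts in the text; they are not the toolkit's actual C# API.

```python
# Illustrative sketch of the decorator-based plugin update cycle.
class DataModel:
    """Hierarchical key-value store, e.g. '/home/person/.../location'."""
    def __init__(self):
        self.entries = {}

    def update(self, key, value):
        self.entries[key] = value

class LocationDecorator:
    """Writes an entity's location into the shared model each update cycle."""
    def __init__(self, model, presence):
        self.model, self.presence = model, presence

    def update(self, raw):
        key = f"/home/{self.presence}/locationdecorator/location"
        self.model.update(key, raw["location"])

class TrackingPlugin:
    """A plugin declares decorators only for the dimensions it can sense."""
    def __init__(self, model, presence, decorator_classes):
        self.decorators = [d(model, presence) for d in decorator_classes]

    def on_update(self, raw):
        # Called once per update cycle with raw tracking data.
        for decorator in self.decorators:
            decorator.update(raw)
```

A position-only tracker would register just `LocationDecorator`; a richer tracker would add orientation or motion decorators, and the server-side model stays oblivious to which hardware filled it in.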
The available dimensions of input data for each tracked entity are directly visible in the monitoring tool: a list view and 3D view give direct feedback about the available proxemic dimensions. These dimensions can also be checked from the client API by using the `isVisible` properties for each available input dimension.

**Distributed data model.** The server’s unified data model is a collection of hierarchical key-value pairs representing all currently tracked entities. The keys are structured according to the following pattern:

```
{space}/[presence]/[proxemic-dimension]/[identifier]
```

For example, the following key-value pairs are part of the data model of a tracked person (i.e., location, motion, and orientation):

```
/home/person/locationdecorator/location = [12.4,3.7,8.2]
/home/person/motiondecorator/velocity = [0.1,0.6,20.5]
/home/person/orientationdecorator/rollangle = -95.5
```

This data model is implemented through a shared hash table that is accessible through TCP connections [3]. Thus, the data model is accessible from all computers linked in the same network. Usually the underlying data model is hidden from developers (though they can access and modify it if desired). The server and the toolkit API calculate necessary proxemic relationships for the entities present in the data model. To reduce computational overhead, the necessary 3D calculations are done only on demand, i.e., when a client subscribes to events for a particular relationship between two entities.

**Substitution.** Tracking systems/plugins can be substituted, providing that their hardware gathers similar tracking information. For example, instead of using the depth camera for tracking people’s positions and postures, a programmer can use the IR motion capture system by attaching IR reflective markers to a person’s body.
Due to the separation of tracking hardware and API, a programmer’s access to this proxemic information via the toolkit API remains unchanged, regardless of the underlying tracking mechanism used.

**Uncertainty.** All 3D tracking systems provide input with some degree of uncertainty. As tracking systems differ in the precision of the tracking data they provide, plugins are required to provide additional information about this uncertainty. In particular, two values describe tracking uncertainty in our toolkit. First, the precision value specifies how accurately the system tracks entities (normalized between 0.0 and 1.0). Precision is defined as 1 / [minimum resolution], where the minimum resolution is measured in mm (e.g., the minimum resolution is 1mm for the VICON system, and 20mm for KINECT). Thus, the finer the minimum resolution, the higher the precision value. Second, the confidence value indicates the estimated accuracy of the provided tracking information. It ranges from 0.0 to 1.0, where 0 is 0% confidence (i.e., lost tracking), and 1 is 100% confidence. In our plugins, the VICON motion capturing system provides estimated accuracy information for all tracked markers, and this value is mapped directly to our confidence value. In contrast, the confidence value of a person tracked by the OPENNI depth cameras is calculated by dividing the number of recognized body parts (e.g., arms, legs) by the total number of possible parts to recognize (i.e., the confidence is 1.0 if the full body of a person is tracked). These confidence and precision values are applied to each individually tracked entity. Furthermore, the precision value can differ depending on where in the 3D space an entity is tracked (e.g., precision is higher when a person stands closer to the depth sensing camera). A developer can monitor the quality of input data with the visual monitor tool.
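A minimal sketch of these two uncertainty values, and of the confidence/precision-weighted merging the server applies when several trackers report the same dimension (described below under *Combination*). The function names are ours, not the toolkit API, and the sample readings are illustrative.

```python
def precision(min_resolution_mm):
    """Precision = 1 / minimum resolution in mm (1 mm -> 1.0, 20 mm -> 0.05)."""
    return 1.0 / min_resolution_mm

def openni_confidence(recognized_parts, total_parts):
    """Fraction of recognized body parts; 1.0 when the full body is tracked."""
    return recognized_parts / total_parts

def merge(readings):
    """Weighted average of overlapping readings, weighted by confidence * precision.

    Each reading is a (value_vector, confidence, precision) triple.
    """
    total_w = sum(conf * prec for _, conf, prec in readings)
    dims = len(readings[0][0])
    return [
        sum(value[i] * conf * prec for value, conf, prec in readings) / total_w
        for i in range(dims)
    ]

# Two trackers reporting the same person's location in one update cycle:
vicon = ([12.4, 3.7, 8.2], 1.0, precision(1))    # confident, 1 mm resolution
kinect = ([12.0, 3.5, 8.0], 0.8, precision(20))  # less confident, 20 mm resolution
merged = merge([vicon, kinect])  # dominated by the high-precision VICON reading
```

Because the weight is the product of confidence and precision, a lost-tracking reading (confidence 0) contributes nothing to the merged value.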
A table view lists confidence and precision values, and the 3D view gives direct feedback on the precision (or absence) of tracking. Similarly, the API exposes the confidence and precision values of each entity. It also includes the isVisible (false if tracking is lost) and lastUpdated (timestamp of the last update) properties.

**Combination.** In cases where different plugins provide complementary tracking information about a single entity, the information can be combined in the proximity server’s data model. For example, the KINECT and VICON systems could both track a single person simultaneously: the KINECT system provides information about the person’s body position in 3D space, and the VICON system tracks a glove the person is wearing in order to retrieve fine-grained information about the person’s finger movements. Both plugins then update the entity’s data model in the server with their tracked information. If two systems provide overlapping/conflicting tracking data (e.g., two systems provide information about an entity’s location), the information is merged in the server’s data model. To do so, the server calculates a weighted average (weighted by the confidence and precision values) of all values received in a certain time frame (i.e., one update cycle) and updates the proxemic data model of that entity. This means that the higher the confidence and precision values of a given entry, the more it affects the final merged value for that entity. Alternatively, other algorithms for tracking data fusion (e.g., [33]) could be seamlessly implemented at the server level (thus not requiring any changes to the plugins or the API). We could also extend the toolkit’s uncertainty information via Schwarz et al.’s [27] framework for handling ambiguous input, which could track ambiguous information simultaneously and delay event triggers.

**Availability.**
Our toolkit, including software and documentation facilitating the development of custom plugins (or other possible extensions to the toolkit), is available as open source on the GroupLab Proximity Toolkit website [10].

APPLICATIONS OF PROXEMIC INTERACTION

The Proximity Toolkit allowed our colleagues – most of whom were not involved in the toolkit design and coding – to rapidly design a large variety of proxemic-aware ubicomp systems. The toolkit was invaluable. Instead of struggling with the underlying low-level implementation details, colleagues and students focused on the design of novel interaction techniques and applications that considered people’s use of space. This includes comprehensive systems such as the proxemic media player by Ballendat et al. [2], and other applications presented in Greenberg et al. [9].

<table>
<thead>
<tr>
<th>Application</th>
<th>Monitored relationships</th>
<th>LOC</th>
<th>LOC proximity</th>
</tr>
</thead>
<tbody>
<tr>
<td>Attention demanding advertisements</td>
<td>2 people, 1 large surface, 1 tablet</td>
<td>284</td>
<td>32</td>
</tr>
<tr>
<td>Spatial music experience</td>
<td>2 people, 4 objects</td>
<td>181</td>
<td>64</td>
</tr>
<tr>
<td>Proxemic-aware pong game</td>
<td>2 people, 1 large surface</td>
<td>209</td>
<td>47</td>
</tr>
<tr>
<td>Proxemic presenter</td>
<td>1 person, 1 large surface</td>
<td>92</td>
<td>18</td>
</tr>
<tr>
<td>ProxemicCanvas workspaces</td>
<td>2 people, 2 notebook computers</td>
<td>393</td>
<td>58</td>
</tr>
</tbody>
</table>

Table 2. Overview of built proxemic-aware applications, the proxemic relationships they monitor, the total lines of code (LOC), and the code for accessing proxemic information (LOC proximity). LOC are approximate.

To illustrate the ease of learning and developing with our toolkit, we summarize a few projects built by students in a graduate ubicomp class in Fall 2010. They received a one-hour tutorial presentation and a demonstration of two programming examples.
The students’ assignment was simply to create a proxemic interface of their choosing, which they had to demonstrate in the next class. Thus all examples (listed in Table 2 and briefly explained below) were built and demonstrated by the students within a week of the tutorial.

Attention-Demanding Advertisements (Miaosen Wang) explores how future advertisement displays might try to grab and keep a person’s attention. The application is built around a digital advertisement board.

Spatial music experience (Matthew Dunlap) is an interactive music installation. The kinds of sounds generated and their volume are determined by the proxemic relationships of people and physical objects in the space. Generated sounds react fluidly as people move and perform gestures in the space, and when they grab and move physical objects.

Proxemic-aware Pong (Till Ballendat) is inspired by Atari’s Pong game. A person controls the paddle for bouncing the ball by physically moving left and right in front of a large screen. The game recognizes when a second person enters, and creates a second paddle for multiplayer play. To increase the game play difficulty over time, it increases the physical distance required to move the paddles. When players move close to the screen, they can adjust the paddle size through direct touch. When both sit down on the couch, the game pauses.

Proxemic Presenter (Miaosen Wang) is a presentation controller that reacts to the presenter’s position relative to a large display [9]. Presentation slides are displayed full screen on the large display. When the presenter stands at the side and turns his head towards the display, a small panel appears next to him, showing speaker notes, a timer, and buttons to navigate the slides. If he switches sides, the panel follows him. When he faces back to the audience, the panel disappears immediately. When he moves directly in front of the display, facing towards it, the system shows an overview of all slides as touch-selectable thumbnails.
When he turns back to the audience, the presentation reappears.

ProxemiCanvas (Xiang Anthony Chen) is an interactive drawing application in which drawing canvases displayed on people’s portable computers gradually merge as a function of proxemic relationships between people and devices. For instance, from close to far distance, this ranges from: (a) merged workspaces when very close, (b) awareness of other people’s work when sitting nearby, to (c) no shared information when turning away (e.g., when people are sitting back to back).

What is important in these examples is how the Proximity Toolkit lowered the threshold for these students to begin their exploration of proxemics in the ubicomp context (Table 2). Easy access to proxemic information through the toolkit and API allowed them to rapidly prototype alternative system designs, all leading towards exploring the design space of future proxemic-aware ubicomp systems.

**RELATED WORK**

Our research is inspired by earlier toolkits enabling the rapid prototyping of ubicomp interactions. We sample and review related work in three areas: toolkit support in HCI, ubicomp development architectures, and 3D spatial tracking.

**Post-GUI Toolkits**

Several development toolkits facilitate the prototyping of physical and tangible user interfaces that bridge the digital and physical world [14]. Many of these toolkits focus on a low threshold, while simultaneously aiming to maintain a relatively high ceiling [23]. For example, Phidgets [8] and the iStuff toolkit [1] provide physical building blocks (buttons, sensors) that programmers can easily address from within their software. Shared Phidgets took this concept further by simplifying the prototyping of distributed (i.e., remotely located) physical user interfaces [21]. Hartmann’s visual authoring environment in d.tools [12] brought similar concepts to interaction designers.
Other toolkits simplified the integration of computer vision techniques into novel user interfaces, such as Klemmer’s Papier-Mâché [16].

**Ubicomp Development Architectures**

On a somewhat higher level of abstraction, Dey introduced an architecture to compose context-aware ubicomp systems with the Context Toolkit [4], which provides context widgets as encapsulated building blocks, working in conjunction with generators, interpreters, or aggregators. The Context Toolkit allows the composition of new applications through a concatenation of the basic components – and thus facilitates scaffolding approaches. Matthews applied similar concepts to the programming of peripheral ambient displays [22]. Other systems facilitate access to location information of devices in ubicomp environments. For example, Hightower’s Location Stack [13] fuses the input data from various sources into a coherent location data model. Krumm and Hinckley’s NearMe wireless proximity server [18] derives the position of devices from their 802.11 network connections (without requiring calibration), and thus informs devices about any other devices nearby. Li’s Topiary [19] introduced prototyping tools for location-enhanced applications.

**3D Spatial Tracking**

Few development toolkits support the exploration of novel interfaces considering the presence, movements, and orientation of people, objects, and devices in 3D space. For example, some toolkits allow development of augmented reality (AR) applications. To illustrate, Feiner’s prototyping system allows exploration of novel mobile augmented reality experiences (e.g., with a head-mounted 3D display, or a mobile tablet-like device) [6]. This was developed further in MacIntyre’s DART [20], OpenTracker [25], and Sandor’s prototyping environment [26] for handheld-based AR applications. These toolkits mostly focus on supporting augmented reality applications running on mobile devices, and not on ubicomp ecologies in small rooms.
Some commercial systems track 3D data of objects. For example, the VICON Nexus software gives access to 3D spatial information of tracked objects. This information, however, only includes low-level position data, which developers need to process manually in order to gain insights into proxemic relationships.

Our Proximity Toolkit builds on this prior work. Like post-GUI toolkits, it bridges the connection between the virtual and real world, but in this case by tracking proxemic information. Similarly, it extends ubicomp architectures and 3D spatial tracking by capturing and providing fine-grained information about 3D proxemic relationships in small ubicomp spaces (i.e., not only location, but also orientation, pointing, identity, etc.). Like the best of these, it supplies an API that, in our case, makes the five essential proxemic dimensions [9] easily accessible to developers. Like the more advanced tools, it also provides additional development tools, such as a monitoring tool for visualizing proxemic relationships, a record/playback tool to simplify testing, templates, documentation, examples, and so on.

CONCLUSION

The Proximity Toolkit enables rapid prototyping and exploration of novel interfaces that incorporate the notion of proxemic relationships. By hiding most of the underlying access to tracking hardware and the complex 3D calculations, our toolkit lets developers concentrate on the actual design and exploration of novel proxemic interaction. We invite other researchers to use it. The Proximity Toolkit is available as open source [10].

ACKNOWLEDGMENTS

This research is partially funded by the iCORE/NSERC/SMART Chair in Interactive Technologies, Alberta Innovates Technology Futures, NSERC, and SMART Technologies Inc.

REFERENCES
An Empirical Characterization of Software Bugs in Open-Source Cyber-Physical Systems Fiorella Zampetti\textsuperscript{a}, Ritu Kapur\textsuperscript{c}, Massimiliano Di Penta\textsuperscript{a}, Sebastiano Panichella\textsuperscript{b} \textsuperscript{a}Department of Engineering, University of Sannio, Italy \textsuperscript{b}Zurich University of Applied Science, Switzerland \textsuperscript{c}C-DAC Centre in North East (CINE), India Abstract Background: Cyber-Physical Systems (CPSs) are systems in which software and hardware components interact with each other. Understanding the specific nature and root cause of CPS bugs would help to design better verification and validation (V\&V) techniques for these systems, such as domain-specific mutants. Aim: We look at CPS bugs from an open-source perspective, trying to understand what kinds of bugs occur in a set of open-source CPSs belonging to different domains. Method: We analyze 1,151 issues from 14 projects related to drones, automotive, robotics, and Arduino. We apply a hybrid card-sorting procedure to create a taxonomy of CPS bugs, by extending a previously proposed taxonomy specific to the automotive domain. Results: We provide a taxonomy featuring 22 root causes, grouped into eight high-level categories. Our qualitative and quantitative analyses suggest that 33.4\% of the analyzed bugs occurring in CPSs are peculiar to them and, consequently, require specific care during verification and validation activities. Conclusion: The taxonomy provides an overview of the root causes related to bugs found in open-source CPSs belonging to different domains. Such root causes are related to different components of a CPS, including hardware, interface, configuration, network, data, and application logic. Keywords: Cyber-Physical Systems, Open-source, Bugs, Defects Taxonomy 1.
Introduction

Nowadays, software development increasingly concerns software systems that interact with hardware devices such as sensors and actuators. Examples include automotive and avionic systems, as well as e-health devices [48], and software governing Internet of Things (IoT) infrastructures for building automation, smart cities, and manufacturing. Such systems are named Cyber-Physical Systems (CPSs). The main distinguishing element of CPSs is that they collect, analyze, and leverage sensor data from the surrounding environment to control physical actuators at run-time [1, 7]. The interaction of a CPS with hardware devices, as well as with humans and other systems, makes the nature and effect of bugs in CPS environments very specific and hard to predict [57, 61]. One of the most famous software failures in a CPS is the one related to the Ariane 5 [19, 33], caused by improper reuse from its predecessor, i.e., the Ariane 4. As a consequence, there is a need to empirically define a CPS-specific bug taxonomy that helps to determine the root causes of the different bugs that might occur in a CPS. Such a taxonomy would help in developing effective CPS-specific bug detection tools and techniques. We define a **CPS bug** as “a flaw in the hardware (not properly handled by the software), or an incorrect interaction between the software and hardware components leading to a CPS misbehavior.” A CPS bug can manifest as a **CPS failure**, which makes a CPS unable to deliver its required functionality or fulfill certain non-functional properties. CPS-specific bugs can occur in the presence of broken sensors [40] or a security attack [58], leading to (unexpected) inputs resulting in a misbehavior of the autonomous system.
As an example, in **PX4-AUTOPILOT** the presence of noisy data, i.e., false or unrealistic readings coming from airspeed sensors, leads to an unexpected behavior of the drone in-flight, i.e., “the controller scaled the control surface signals up leading to heavy oscillations”\(^1\). A different example has been experienced in the **OPENPILOT** project where, on a specific device (Rav4 Prime), the software does not work as expected while “a CAN bus error occurs”\(^2\). After having tried the software on different devices, developers concluded that the software is not compatible with the Rav4 Prime. The latter highlights that unexpected behavior can be observed in a CPS when running the software on unsupported hardware devices.

The goal of this paper is to empirically define a taxonomy of bugs occurring in CPSs belonging to different domains, to help design better verification and validation (V&V) techniques for CPSs. The purpose of devising such a bug taxonomy is many-fold. Specifically, it can be useful to (i) better understand the root causes of failures [23], (ii) better plan code review [39] and testing activities, and (iii) define domain-specific (testing) mutants. The latter is important since previous research has shown that test mutants do not always represent real faults [31], and therefore a domain-specific mutant taxonomy may be required for emerging systems such as CPSs. Not only would the fault distribution change, but also there could be

---
1. https://github.com/PX4/PX4-Autopilot/issues/8980
2.
https://github.com/commaai/openpilot/issues/2103

Our study stems from Garcia et al.’s autonomous vehicle (AV) bug taxonomy [23], derived by looking at data from two AV systems, and contributes a differentiated replication study (i.e., a replication differing in domain and methodology, and possibly focusing on specific aspects of the original study [13, 49]), as summarized in the following:

- **Different purpose:** while the goal of Garcia et al.’s study is to relate symptoms and root causes, we only focus on the root causes, with the aim of supporting future research aimed at deriving specific mutation testing strategies for CPSs.

- **Different domains:** while Garcia et al.’s study specifically targets bugs in self-driving car software from two systems, our work involves more projects and spans different CPS domains. Specifically, we analyze a more heterogeneous set of bugs from 14 different projects including Arduino (e.g., the Arduino core, as well as Internet of Things – IoT – and infrared remote libraries for Arduino), drones, robotics, and automotive.

- **Number of projects analyzed:** while Garcia et al.’s work targets two open-source projects, we extend upon their work by analyzing 1,151 issues from 14 open-source projects belonging to four different CPS domains.

- **Explicitly distinguishing CPS-specific bugs from generic bugs:** while discussing the different bug categories, we distinguish bugs that are specific to CPSs from bugs that may occur in any (conventional) software system.

The bug categorization has been conducted using a hybrid card-sorting approach [45], i.e., we started from the set of predefined categories used in Garcia et al.’s work [23]. While their work belongs to the automotive domain, we found its domain relatively close to our work (as we also consider automotive projects), and the categories they defined in terms of root causes applicable to our context as well.
As a result of the manual categorization, we obtained a taxonomy featuring 22 root causes, grouped into eight high-level categories. The taxonomy enhances and extends the one previously created by Garcia et al. [23] for AV bugs. To the best of our knowledge, this is the first work that proposes a taxonomy aimed at identifying the root causes of the bugs introduced by developers while developing CPSs, including hardware-, network-, interface-, data-, configuration-, algorithm-, and documentation-related bugs. On the one hand, our results point out that ≈33% of the bugs are CPS-specific. Even if this percentage may appear relatively limited, it is not entirely surprising, as the majority of bugs occurring in a CPS (as in any other traditional software system) are conventional (e.g., programming logic) bugs. On the other hand, our set of CPS-specific bugs could be used to define mutation testing or fault-injection [51] strategies specific to CPS domains, as well as testing solutions specific to CPSs.

Our replication package, submitted as additional material for review and available on Zenodo [56], contains (i) the scripts developed to extract the data used for this research and (ii) the manually validated dataset of bugs occurring in CPSs.

The paper is organized as follows. Section 2 details the study definition and planning. The CPS bug taxonomy is presented and discussed in Section 3. The study implications for developers and researchers are discussed in Section 4, while Section 5 discusses the threats to the study validity. Finally, Section 6 discusses related research, while Section 7 summarizes our findings and outlines directions for future work.

2. Study Definition and Methodology

The goal of our study is to analyze the root causes of bugs occurring in open-source CPSs. The perspective is that of researchers developing suitable approaches to support the discovery, localization, and management of CPS-specific bugs.
Also, the study results can be useful to developers in understanding the nature of bugs occurring in CPSs. The context of the study consists of 1,151 closed issues sampled from 14 open-source CPS projects hosted on GitHub. Specifically, we answer the following research question:

**RQ: What types of bugs occur in open-source CPSs?**

This research question focuses on qualitatively defining a taxonomy comprising root causes for bugs occurring in CPSs. We aim at discriminating bugs specific to CPSs from bugs similar to those also occurring in traditional (general-purpose) software. In the following, we detail the methodology adopted to answer our research question.

2.1. Methodology Overview

Fig. 1 depicts the methodology we followed, which consists of four subsequent steps. First, (1) we performed an inception phase aimed at enriching our knowledge about the studied problem and determining a starting point for our taxonomy. Then, (2) we selected the pool of projects to be considered in the study and extracted issues from them. After that, (3) we performed the CPS bug categorization, which involved four people (two annotators and two reviewers). Finally, (4) an independent annotator re-labeled the whole set of bugs, to limit subjectivity, and, after having solved conflicts, the final taxonomy of CPS bugs was created.

2.2. Inception Phase

As a first step, we needed to enrich our knowledge by identifying, from previous literature, studies aimed at characterizing bugs in different software application domains. More specifically, as detailed in Section 6, we looked at previous bug taxonomies [23, 24, 27]. Among them, the closest (in terms of domain) was the one proposed by Garcia et al. [23], which looked at bugs affecting two open-source autonomous vehicle (AV) systems. They classified the root causes of AV bugs into 13 categories, summarized in Table 1.
By looking at their descriptions, we found such categories a suitable starting point for our work, mainly because they feature conventional bugs (e.g., missing condition checks and incorrect condition logic) but also numerical bugs, and bugs related to how the software interfaces with hardware components, which are likely to occur in CPSs. As a consequence, we use the categories detailed in Table 1 as a starting point for our investigation, while (i) adding further categories, and (ii) determining the extent to which those categories apply to open-source CPSs from different domains.

2.3. Project Selection

To properly derive a taxonomy aimed at covering root causes for bugs occurring in CPSs, we selected 14 open-source projects hosted on GitHub. To identify CPS-related projects, we used the GitHub query search feature. Since our goal is to identify projects belonging to different CPS domains, including the previously studied automotive domain [23], we experimented with CPS-specific GitHub queries: drone, robot, and autonomous vehicle. In addition, we also considered arduino to explicitly target projects that likely design and manufacture single-board micro-controllers and micro-controller kits for building digital devices. Starting from the initial set of projects, we applied the following selection criteria, ending up with a sample of 14 projects whose characteristics are summarized in Table 2:

- **Project Popularity**: We sorted the results by stars to focus on popular repositories. Note that, while selecting projects solely based on stars has been criticized [11], this is not our only criterion.

- **Programming languages**: We selected projects having C++ or Python as their main programming languages since, while querying GitHub for projects belonging to the CPS domain, we realized that most of them use these languages.
This is consistent with findings from previous literature highlighting that CPS development is mostly performed in C/C++, with many complex CPSs that “do not allow the use of other languages” such as Java or Swift [44].

- **Use of GitHub issue tracker**: The projects must rely on GitHub for tracking their issues, so we only considered projects having at least 100 closed issues.

- **Use of issue labels**: Since our goal is to classify bugs and not any type of issue (e.g., enhancements or new features), we only focused on projects that use labels for discriminating between different types of issues, retaining only projects having at least 10 issues related to bugs and labeled as such.

- **Active Projects**: We selected projects having at least 5 closed issues in the last 3 months of the observed period.

The number of projects to consider as context for our study has been determined so that (i) we were able to sample and analyze enough issues per project (this would not be possible if analyzing a large number of projects and then sampling very few bugs from them), and (ii) we had a similar number of projects for each CPS domain.

### 2.4. CPS Bug Categorization

Once having identified the projects of interest, we proceeded with the extraction of their GitHub issue data (i.e., title, description, labels, status, and comments) using Perceval [20]. As reported in Table 2, we downloaded a total of 32,333 issues, of which 28,094 are closed. Once having filtered out all closed issues that did not have a bug-related label, we ended up with a total of 3,713 issues relevant to our study.

**Issue Sampling.** Since manually analyzing all bug-related issues in our dataset would be infeasible, we extracted a statistically significant sample to be analyzed.
Specifically, we applied a stratified random sampling on each project, with a confidence level of 95% and a confidence interval of ±2.4%, which led to the selection of 1,151 closed bug-related issues to be manually analyzed. The sample size ($SS$) is based on a formula for an unknown population [41]:

$$SS = p \cdot (1 - p) \frac{Z_\alpha^2}{E^2}$$

and $SS_{adj}$ for a known population $pop$:

$$SS_{adj} = \frac{SS}{1 + \frac{SS - 1}{pop}}$$

where $p$ is the estimated probability of the observation event to occur (assumed to be 0.5 if we do not know it a priori), $Z_\alpha$ is the value of the Z distribution for a given confidence level, and $E$ is the estimated margin of error (5%). (With the confidence interval of ±2.4% used above, i.e., $E = 0.024$ and $Z_\alpha = 1.96$, these formulas yield $SS \approx 1{,}667$ and, for the population of 3,713 bug-related issues, $SS_{adj} \approx 1{,}151$.)

**Preliminary Bug Categorization.** Following the recommendation from previous work [50], we chose to label only the issues whose discussion was not tangled, to reduce the possibility of misclassifying their root causes. Moreover, since developers may assign inappropriate labels when opening and discussing issues [4, 25], we discarded the issues that were not bugs, despite their label. In other words, we performed a first high-level manual filtering of the issues in our dataset, discarding issues that (i) were not bugs, despite the label; (ii) were not linked to a specific fix; and (iii) were duplicates of already analyzed issues. After this preliminary manual filtering, we ended up with a set of 655 bugs to be used for deriving the taxonomy for CPS bugs.

Table 2: Characteristics of the analyzed projects.

<table>
<thead>
<tr>
<th>Project</th>
<th>Domain</th>
<th>Issues</th>
<th>Closed Issues</th>
<th>Bug-related Issues</th>
</tr>
</thead>
<tbody>
<tr>
<td>Autoware-AI/autoware.ai</td>
<td>Automotive</td>
<td>1,030</td>
<td>1,027</td>
<td>49</td>
</tr>
<tr>
<td>commaai/openpilot</td>
<td>Automotive</td>
<td>608</td>
<td>562</td>
<td>78</td>
</tr>
<tr>
<td>ArduPilot/ardupilot</td>
<td>Drones</td>
<td>5,123</td>
<td>3,613</td>
<td>1,044</td>
</tr>
<tr>
<td>PX4/PX4-Autopilot</td>
<td>Drones</td>
<td>5,875</td>
<td>5,303</td>
<td>1,197</td>
</tr>
<tr>
<td>dronelab/Dronelab-Python</td>
<td>Drones</td>
<td>631</td>
<td>311</td>
<td>62</td>
</tr>
<tr>
<td>mavlink/groundcontrol</td>
<td>Drones</td>
<td>3,873</td>
<td>3,038</td>
<td>92</td>
</tr>
<tr>
<td>ros/roslab</td>
<td>Robotics</td>
<td>100</td>
<td>99</td>
<td>41</td>
</tr>
<tr>
<td>carla-simulator/carla</td>
<td>Robotics</td>
<td>3,298</td>
<td>2,933</td>
<td>168</td>
</tr>
<tr>
<td>cyberbotics/webots</td>
<td>Robotics</td>
<td>1,076</td>
<td>912</td>
<td>486</td>
</tr>
<tr>
<td>bblanchon/ArduinoJSON</td>
<td>Arduino</td>
<td>1,346</td>
<td>1,329</td>
<td>136</td>
</tr>
<tr>
<td>Arduino-IRemote/Arduino-IRemote</td>
<td>Arduino</td>
<td>505</td>
<td>502</td>
<td>17</td>
</tr>
<tr>
<td>miguelbalboa/rfid</td>
<td>Arduino</td>
<td>334</td>
<td>312</td>
<td>17</td>
</tr>
<tr>
<td>esp8266/Arduino</td>
<td>Arduino</td>
<td>3,367</td>
<td>3,208</td>
<td>12</td>
</tr>
<tr>
<td>TOTAL</td>
<td>-</td>
<td>32,333</td>
<td>28,094</td>
<td>3,713</td>
</tr>
</tbody>
</table>

With respect to the initial population, this is a statistically significant sample with a 95% confidence level and a ±3.78% confidence interval. On the useful set of 655 issues, we performed a hybrid card-sorting approach [45], considering Garcia et al.’s taxonomy [23] as a reference starting point. Our card sorting consisted of the following steps: 1.
**Annotation Phase**: We split the 655 issues into two sets. Two annotators independently evaluated their assigned set of issues and proposed labels for them. To identify the root cause of an issue, the annotators looked at the title, description, discussion, and source code change diffs associated with each linked fix. The labeling was performed by reusing the labels provided by Garcia et al. [23], adding new labels when necessary.
2. **Reviewing Phase**: Each labeled issue was subsequently reviewed by a different annotator (not involved in the initial annotation phase), who confirmed or rejected the categories assigned in the previous step. In case of disagreement, i.e., when the new annotator rejected the previously identified category, a discussion was opened involving an additional annotator (not involved in the previous labeling). A decision was taken by applying a majority-vote strategy among the participants in the discussion. Among the 655 analyzed issues, a discussion was needed in 93 cases.

As an outcome of the overall bug categorization step, a preliminary version of the taxonomy was created.

### 2.5. Definition of the Final Taxonomy

To guarantee the integrity of the labeled dataset (i.e., reducing possible subjectivity and bias) and of the emerging categories (i.e., removing potential redundancies from the preliminary version of the taxonomy), an additional annotator, not involved in the previous bug categorization step, independently re-labeled the previously validated issues. Then, the two labeled datasets were compared to identify disagreements, which were discussed and resolved in a session with two different reviewers. Besides resolving disagreements through discussion, we computed the inter-rater agreement to determine to what extent annotators agreed by chance.
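Cohen's $k$ (kappa), used for this purpose, can be sketched as follows; the code and the label sequences are our own illustrations, not the study's tooling:

```python
from collections import Counter

def cohen_kappa(a, b):
    """Cohen's kappa for two annotators' label sequences of equal length."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    # Observed agreement: fraction of items with identical labels
    p_o = sum(x == y for x, y in zip(a, b)) / n
    # Chance agreement: from each annotator's marginal label frequencies
    ca, cb = Counter(a), Counter(b)
    p_e = sum(ca[l] * cb[l] for l in set(a) | set(b)) / n**2
    return (p_o - p_e) / (1 - p_e)

# Hypothetical CPS-specific (1) vs. generic (0) labels from two annotators
print(cohen_kappa([1, 1, 0, 0], [1, 0, 0, 0]))  # 0.5
```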
That is, a low inter-rater agreement compared to the raw agreement rate indicates that several cases of agreement could have occurred (also mistakenly) by chance. To determine the reliability of the manual labeling, we used Cohen’s $k$ inter-rater agreement [16]. Specifically, the agreement was computed at two different levels. First, we computed the agreement on whether or not a bug is CPS-specific, obtaining a percentage of agreement of $\approx 93\%$ and a Cohen’s $k = 0.84$, which indicates an almost perfect agreement between the annotators. Then, we looked at the percentage of agreement when assigning the high-level category of our taxonomy. In this case, the percentage of agreement is equal to 76%, with a Cohen’s $k = 0.66$, representing a strong agreement.

## 3. Study Results

Table 3 reports the root causes of bugs in the CPSs we analyzed. The taxonomy comprises 22 different root causes grouped into 8 high-level categories:

1. **Hardware** includes bugs whose root cause is in the hardware and its related software, e.g., faulty data sent by sensors, components not supported by the current implementation, or overflow of the physical storage on the real device;
2. **Network** includes bugs whose root cause is in the communication between the hardware and the software components, in terms of connections and packets being lost or corrupted;
3. **Interface** groups bugs whose root cause resides in a misuse of the interface of the hardware devices, other software libraries and/or components, as well as bugs inherited from third-party components integrated into the system;
4. **Data** groups bugs caused by the usage of an improper data structure, as well as bugs related to its storage;
5. **Configuration** includes bugs due to a wrong build configuration process;
6.
**Algorithm** groups bugs whose root cause is related to how the application logic is implemented, e.g., incorrect conditional expressions, incorrect numerical calculations or values, as well as misuse of memory in terms of the strategies used for allocating or de-allocating it;
7. **Documentation** includes bugs that do not occur in the source code but rather in the documentation associated with the application, i.e., the documentation is outdated with respect to the current version of the software;
8. **Others** groups those root causes that cannot be classified into any of the previous categories.

Table 3 also reports, for each root cause, the number and percentage of bugs belonging to it, discriminating between bugs that are specific to CPSs (e.g., related to hardware or its interfacing, or to algorithms controlling hardware) and generic bugs that are not CPS-specific. In the following, we discuss each category, reporting a short description together with representative examples for each specific root cause, and then outline our main findings including, whenever possible, implications for practitioners and researchers. The discussion starts from the categories not included in the classification of AV bugs by Garcia et al. [23] (i.e., Hardware and Network), and then considers the categories that abstract or specialize those from the previous taxonomy.

### 3.1. Hardware

**Description:** We found five root causes dealing with hardware components being integrated with CPSs: (i) **HW not supported/not compatible** groups bugs that are generated by using a hardware component/device that is not supported or is not

### Table 3: Taxonomy of CPS bugs: number of bugs for each root cause, number (percentage) of CPS-specific bugs, and other bugs.
<table>
<thead>
<tr> <th>Category</th> <th>Root Cause</th> <th># (%) of CPS-specific bugs</th> <th># (%) of other bugs</th> </tr>
</thead>
<tbody>
<tr> <td>Hardware</td> <td>Energy</td> <td>2 (0.31)</td> <td>0 (0)</td> </tr>
<tr> <td></td> <td>Faulty Sensor Data</td> <td>7 (1.07)</td> <td>0 (0)</td> </tr>
<tr> <td></td> <td>Hardware Failure</td> <td>4 (0.61)</td> <td>0 (0)</td> </tr>
<tr> <td></td> <td>HW Not Supported/Not Compatible</td> <td>11 (1.68)</td> <td>0 (0)</td> </tr>
<tr> <td></td> <td>Memory</td> <td>4 (0.61)</td> <td>0 (0)</td> </tr>
<tr> <td>Total Hardware</td> <td></td> <td>28 (4.28)</td> <td>0 (0)</td> </tr>
<tr> <td>Network</td> <td>Connection/Communication</td> <td>5 (0.76)</td> <td>11 (1.68)</td> </tr>
<tr> <td></td> <td>Packet Corrupted/Lost</td> <td>8 (1.22)</td> <td>1 (0.15)</td> </tr>
<tr> <td>Total Network</td> <td></td> <td>13 (1.98)</td> <td>12 (1.83)</td> </tr>
<tr> <td>Interface</td> <td>External</td> <td>27 (4.12)</td> <td>27 (4.12)</td> </tr>
<tr> <td></td> <td>Internal</td> <td>6 (0.92)</td> <td>24 (3.66)</td> </tr>
<tr> <td>Total Interface</td> <td></td> <td>33 (5.04)</td> <td>51 (7.78)</td> </tr>
<tr> <td>Data</td> <td>Incorrect Data Structure</td> <td>0 (0)</td> <td>9 (1.37)</td> </tr>
<tr> <td></td> <td>Not Persisted</td> <td>0 (0)</td> <td>2 (0.31)</td> </tr>
<tr> <td>Total Data</td> <td></td> <td>0 (0)</td> <td>11 (1.68)</td> </tr>
<tr> <td>Configuration</td> <td>Build Configuration</td> <td>2 (0.31)</td> <td>68 (10.38)</td> </tr>
<tr> <td></td> <td>Wrong Parameters</td> <td>5 (0.76)</td> <td>13 (1.98)</td> </tr>
<tr> <td>Total Configuration</td> <td></td> <td>7 (1.07)</td> <td>81 (12.36)</td> </tr>
<tr> <td>Algorithm</td> <td>Assignment</td> <td>19 (2.9)</td> <td>58 (8.85)</td> </tr>
<tr> <td></td> <td>Concurrency</td> <td>2 (0.31)</td> <td>5 (0.76)</td> </tr>
<tr> <td></td> <td>Incorrect Condition Logic</td> <td>17 (2.6)</td> <td>35 (5.34)</td> </tr>
<tr>
<td></td>
<td>Memory</td> <td>5 (0.76)</td> <td>16 (2.44)</td> </tr>
<tr> <td></td> <td>Missing Condition Check</td> <td>25 (3.82)</td> <td>50 (7.63)</td> </tr>
<tr> <td></td> <td>Numerical</td> <td>18 (2.75)</td> <td>23 (3.51)</td> </tr>
<tr> <td></td> <td>Programming</td> <td>26 (3.97)</td> <td>90 (13.74)</td> </tr>
<tr> <td>Total Algorithm</td> <td></td> <td>112 (17.11)</td> <td>277 (42.27)</td> </tr>
<tr> <td>Documentation</td> <td>-</td> <td>1 (0.15)</td> <td>27 (4.12)</td> </tr>
<tr> <td>Others</td> <td>-</td> <td>0 (0)</td> <td>2 (0.31)</td> </tr>
<tr> <td><strong>OVERALL</strong></td> <td></td> <td><strong>194 (33.40)</strong></td> <td><strong>461 (66.56)</strong></td> </tr>
</tbody>
</table>

compatible with the CPS system; (ii) **Faulty Sensor Data** includes bugs due to sensors providing faulty values; (iii) **Memory** groups bugs generated by the storage on physical devices; similarly, (iv) **Energy** includes bugs dealing with the power of physical devices; and (v) **Hardware Failure** groups bugs where the root cause of the failure is directly in the hardware component/device. Based on the above description, and as reported in Table 3, this category includes only CPS-specific bugs.

**Discussion and Examples:** 11 out of 28 bugs belong to **HW not supported/not compatible**. Bug #2103\(^3\) in OPENPILOT points out the presence of a CAN bus error on a specific device (i.e., the Rav4 Prime). After a detailed discussion, in which other users reported experiencing the same problem, the developer ended up stating: “This isn’t an issue - the RAV4 Prime isn’t listed as a supported car”.

\(^3\)To access a bug description, use https://github.com/$owner/$repo/issues/$issue_number. For this specific example, it is https://github.com/commaai/openpilot/issues/2103.
From a different perspective, still in OPENPILOT, we found a different bug (#1813) where the problem is due to the usage of the wrong simulator: since “some older HKG vehicles do not have FCA11msg”, it is required to “use SCC12 for stock ADAS signals on cars that don’t have FCA11”.

**Faulty Sensor Data** includes seven CPS-specific bugs. For instance, we found a case in which there is a GPS glitch (bug #14253 in ARDUINO), or false and unrealistic airspeed measurements (bug #8980 in PX4-AUTOPILOT) resulting in an unsuccessful flight.

We found four CPS-specific bugs belonging to Memory: consider bug #1662 in AUTOWARE.AI, where the functionality crashes because the disk capacity has been filled up, or bug #2352 in ARDUINO, where the misbehavior (i.e., an empty logs list over MAVLink) occurs as a consequence of the SD card being full.

We found two CPS-specific bugs belonging to Energy. As an example, in ARDUINO, a user experienced a crash together with a partial data freeze as a consequence of “a power failure. Power overload, not a software problem” (bug #6300).

Finally, we found four bugs for which the root cause is directly related to the hardware component/device. For instance, bug #9738 in PX4-AUTOPILOT, after a very long discussion, was closed stating that: “… it turns out that my board had hardfaulted ... and then it got “stuck” waiting for keyboard input on the console to clear the hardfault. The fault wouldn’t/didn’t clear itself over multiple reboots, leading to a “bricked” board.”

**Main Findings:** Hardware-specific bugs are peculiar to our taxonomy and, unsurprisingly, all of them are CPS-specific. Recognizing (and simulating) hardware failures is of paramount importance in V&V. Also, developers should take particular care of hardware compatibility, especially for CPSs targeting multiple devices.
Last but not least, the interaction with the hardware makes the analysis of non-functional properties, such as performance, memory, and energy consumption, particularly crucial.

### 3.2. Network

**Description:** Differently from Garcia et al.’s work [23], we have identified a new root cause accounting for bugs occurring in the networking between software and hardware components. Indeed, the network plays a paramount role in many CPSs [42], e.g., in the Internet-of-Things (IoT) domain, or in domains such as drones, satellites, automotive, etc. [3, 21, 55]. We discriminated between bugs dealing with (i) packets being lost or corrupted, and (ii) mere connection problems. Out of 25 bugs in this category, 13 are CPS-specific, with most of them belonging to Packet Corrupted/Lost.

**Discussion and Examples:** Bug #1696 in OPENPILOT is an example of Packet Corrupted/Lost, in which the fault is due to an improper parity bit and command ordering in the received message (i.e., the packet is corrupted). With the same root cause, we also found bug #4302 in ARDUINO, where there is a memory leak while making repeated connections to a server, causing the loss of around 8KB for each connection.

For what concerns bugs with the Connection/Communication root cause, we found five out of 16 to be CPS-specific. In ARDUINO we found a bug (#11398) where the problem experienced is that the “gimbal’s tilt control is overshooting badly” while using the ChibiOS environment. After a long discussion, a developer found that the root cause of the misbehavior is the latency while communicating through an I2C bus. A different problem has been reported in ARDUINO (bug #4060): after having properly completed a sequence of requests through the network, the user started to receive a “Connection Refused” error and the connection status showed that it had been lost. After those events, the user could not even reconnect to the network.
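As an aside, the kind of parity check involved in such packet-corruption bugs can be sketched as follows (our own illustration; the single-parity-bit convention is an assumption, not OPENPILOT's actual protocol):

```python
def even_parity_ok(payload: bytes, parity_bit: int) -> bool:
    """Check that the payload's count of 1-bits matches an even-parity bit."""
    ones = sum(bin(byte).count("1") for byte in payload)
    return ones % 2 == parity_bit

# A receiver would drop (or re-request) packets failing this check
print(even_parity_ok(b"\x03", 0))  # 0b11 has two 1-bits -> True
```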
**Main Findings:** In several CPSs, networking plays a paramount role and can therefore be the origin of bugs. The CPS monitoring infrastructure should hence include network monitors. Moreover, V&V techniques should consider CPS misbehavior caused by network-specific aspects.

### 3.3. Interface

**Description:** This category groups bugs dealing with a misuse of the interfaces between software and hardware components (External), as well as between different software components, which may be either external software libraries used in the CPS (External) or modules and/or classes of the same CPS (Internal). It is important to remark that when a reported issue affects a third-party library used by the system (i.e., an inherited bug), we did not exclude it, but labeled it as an external interface bug. 84 out of 655 (≈13%) bugs in our dataset belong to this category, with 33 of them being CPS-specific.

**Discussion and Examples:** As regards bugs inherited from third-party components integrated into the software application (External), in PX4-AUTOPILOT we found a bug (#6546) dealing with GPS “jamming” that had already been reported as an issue in the library supporting the Intel Aero Platform. In other words, the bug was inherited from the library used for interfacing with the GPS. The External root cause, however, also contains CPS-specific bugs dealing with the interface towards the hardware components. For instance, bug #8822 in PX4-AUTOPILOT reports a problem with the name of the rotors to use while connecting to the drone: it is required to use “rear” instead of “rear-right”. We also found cases where, as a consequence of firmware updates, the CPS stops working as expected.
For instance, bug #226 in ARDUINO reports: “ESP8266HTTPUpdateServer fails to check sketch size versus available space” and, while looking at the fix, we realized that the bug was introduced as a consequence of the new firmware being released (i.e., make the updater fail if there is not enough space to fit the new firmware). Finally, we found functional bugs in the drivers’ implementation (i.e., the interface component). As an example, issue report #11854 in PX4-AUTOPILOT points out that the driver connected to the temperature sensor is not able to provide the right results even if the sensor values are not corrupted. By looking at the issue report, we found that the problem is in the logic implemented for filling the report buffer with the temperature values coming from the sensor. To fix the problem, the developer changed the code considering the value reported in the datasheet of the sensor measuring the temperature (i.e., the BMI055).

Concerning the interface with Internal software components, we found only six CPS-specific bugs out of 30. Very often, developers rely on the wrong API (or misuse certain APIs) while accomplishing a specific task. For instance, bug #66 in DRONEKIT-PYTHON states that there is a wrong usage of the API used for issuing a command: the developer relies on the “VehicleMode” class instead of using the “Command” class.

**Main Findings:** Interfacing bugs are challenging for developers coping with CPSs. Clearly, testing efforts should focus on this aspect. Moreover, when hardware or firmware changes, there may be a lack of documentation or code examples for developers.

### 3.4. Data

**Description:** This category groups bugs whose root cause is in the way data is stored (Not Persisted) and handled (Incorrect Data Structure) by the application logic of the system. Quite surprisingly, we did not find any bug that is specific to the CPS domain, and, as reported in Table 3, only a few bugs (1.68%) belong to this category.
**Discussion and Examples:** Two bugs are related to data persistence, and nine deal with incorrect data structures being used for modeling purposes. An interesting example is bug #60 in ARDUPILOT, stating that some settings are not stored permanently while they should be (i.e., “video device input setting not stored permanently”). The latter comes with an unexpected behavior summarized by the user as: “the video is replaced with a white box and the desired input device needs to be selected again”.

**Main Findings:** We did not find many data-related bugs, and those found are not CPS-specific. Although data exchange and storage have paramount importance for CPSs, we found that problems tend to occur at the interface level rather than being related to data structure management. Therefore, developers (and testers in particular) should focus their effort there.

### 3.5. Configuration

**Description:** This category groups bugs related to (i) how the build process is configured, and in particular how the commands or the environments have been configured; or (ii) how the application run-time parameters are configured. Even if ≈ 92% of them are not CPS-specific, we found seven bugs, i.e., two in Build Configuration and five in Wrong Parameters, that are strictly related to the CPS domain.

**Discussion and Examples:** In PX4-AUTOPILOT we found a bug (#2229) in which a user discovered that a driver for a specific hardware component (i.e., the PCA8574) is included in the startup script even if it is never compiled. As a solution, the developer, instead of completely removing it from the script, only commented it out since “it might be useful down the road”. Moving to the Wrong Parameters root cause, we refer to bug #1017 in ARDUINOJSON where, after a crash of the board, the watchdog timer resets correctly but the board does not restart automatically.

---
1 https://github.com/esp8266/Arduino/pull/2405
After a long discussion in which the developer provided additional details on how to properly connect the hardware components to the software application, a hard reset was forced. The latter translates into modifying the parameters used while running a specific command during configuration.

### 3.6. Algorithm

**Description:** This category includes 389 bugs (of which 112 are CPS-specific), mainly due to how the application logic is implemented. In this category we have seven different root causes: (i) Assignment deals with variables that are wrongly assigned and/or initialized; (ii) Missing Condition Check (MCC) includes bugs related to a logic being only partially implemented; (iii) Incorrect Condition Logic groups those bugs in which the condition logic has been improperly implemented; (iv) Numerical includes bugs due to incorrect numerical calculations and values/ranges; (v) Memory accounts for bugs in the logic used by the application while allocating or de-allocating memory; (vi) Concurrency groups bugs that are related to a misuse of concurrency-oriented structures; and (vii) Programming deals with bugs that cannot be assigned to any single one of the other root causes in the same category.

**Discussion and Examples:** 77 out of 389 algorithm bugs are related to Assignment. As an example, in ARDUPILOT we found a bug (#801) related to how the vertical acceleration was set (i.e., assigned the first time it is used) and used. This has been confirmed by the fix commit stating: “Vehicle was not reaching target climb or descent rate because of incorrectly defaulted acceleration”. In PX4-AUTOPILOT, instead, we found a bug (#1098) manifesting during compilation due to a parameter not being initialized (i.e., “warning: ’alt_sp’ may be used uninitialized in this function”). As reported in Table 3, 75 bugs are generated from conditions that are not considered and handled (i.e., Missing Condition Check).
As expected, 66.7% of them are not CPS-specific and can occur in systems independently of whether or not they interact with hardware devices and simulators. Among the 25 CPS-specific bugs, in ARDUPILOT we found a bug (#2620) discussing that the value output by the barometer sensor in a specific condition is not handled by the application. Specifically, the user stated: “the barometer altitude became NaN [...] but the EKF probably continued to use the barometer altitude because the EKF’s readHgtData method doesn’t check the health of the baro”. Fixing this problem requires checking the status of the sensor before consuming its value.

As regards Incorrect Condition Logic, we identified 52 bugs belonging to this root cause, of which 17 are CPS-specific. For instance, in ARDUPILOT we found a bug (#5660) in which the user discovered that, while stopping the propeller movement, the Revolutions Per Minute (RPM) sensing does not provide any new measurement, so no updates are processed by the Mission Planner. The latter implies that “if your engine dies mid-air, you will never get a 0 RPM”. By carefully analyzing the issue, the developer confirmed that “if you stop getting signals, or get them slower than 1 Hz, then it sets the “quality” to zero and the healthy goes false and it will no longer log it.” The bug was fixed by changing the condition being checked, to always log RPM when enabled and not only when healthy.

For Numerical bugs, we found that 44% of them (18 out of 41) are CPS-specific. Compared to the work by Di Franco et al. [18], we only accounted for incorrect numerical calculations and values/ranges. Specifically, we considered all those cases where (i) there is a division by zero, (ii) the value may not have a precise representation in memory because of rounding errors, and (iii) the value is wrongly evaluated, i.e., there is an error in the formula used to determine it.

---
1 https://github.com/esp8266/Arduino/pull/5433/files
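The three numerical cases can be illustrated with a short, self-contained sketch (our own code, not taken from any of the studied projects):

```python
import math

def safe_ratio(num, den):
    """Case (i): guard against division by zero."""
    return num / den if den else float("nan")

# Case (ii): 0.1 and 0.2 have no exact binary representation,
# so equality checks on their sum fail; compare with a tolerance instead
assert 0.1 + 0.2 != 0.3
assert math.isclose(0.1 + 0.2, 0.3)

# Case (iii): a wrong formula, e.g., forgetting the factor 2
def circumference(radius):
    return 2 * math.pi * radius  # 'math.pi * radius' would be the buggy variant

print(safe_ratio(1.0, 0.0))  # nan
```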
For instance, in DRONEKIT-PYTHON we have a bug (#298) manifesting as a race condition due to the usage of a value wrongly defaulted to “None” before being correctly instantiated. As a result, the application raises a “divide by None” error. Dealing with this problem means properly assigning the default value instead of relying on “None”.

21 out of 389 algorithmic bugs deal with how Memory is used by the application logic. Note that this root cause is different from the one in the Hardware category. For instance, in bug #5670 in ARDUPILOT the user reports: “When changing baud rates buffers should be cleared”, otherwise having data in the buffer that has not yet been parsed results in falsely detecting valid GPS instances.

In terms of misuse of concurrency-oriented structures, we only found seven cases in our dataset, with only two being CPS-specific. One likely reason for such a small proportion is that most of the systems we consider do not use concurrency at all (e.g., Arduino does not have native support for threads). An example of a concurrency bug was found in DRONEKIT-PYTHON (bug #12), in which the root cause was highlighted by the developer as “… this is caused by race conditions caused by threading. Spin-waiting on two separate threads for parameter confirmation causes a lot of “BAD DATA” messages to pop out [in a non-deterministic manner]”.

Finally, most of the bugs in our manually analyzed sample belong to Programming (116 out of 389), with 26 being specific to the CPS domain. As an example, in PX4-AUTOPILOT we found bug #5446 dealing with the flakiness of a test command, i.e., “The 9250 test command is flaky, interfering with the normal operation of the sensor.”

**Main Findings:** Algorithmic bugs in CPSs tend to be similar to those occurring in other types of software systems. Therefore, existing mutant taxonomies can be used to seed some representative faults.
However, the way failures manifest (e.g., flaky effects on the hardware or actuators) can make these bugs more subtle to detect and potentially dangerous. This should encourage developers to make heavy use of logging and assertions (while caring about their overhead). Some root causes, such as concurrency, are generally avoided “by construction”, i.e., by not supporting concurrency at all.

### 3.7. Documentation

**Description:** This category includes bugs dealing with the system’s documentation. As previous research has pointed out, documentation issues are as important as program-related issues [2, 14]. These problems mostly occur independently of the application domain. Indeed, as reported in Table 3, we found 28 bugs belonging to this category (≈ 4%), with only one being CPS-specific.

**Discussion and Examples:** In this context, a perfect alignment between the system (which includes not only software but also hardware) and the related documentation is crucial. In general, this has been the subject of various studies and approaches [47, 5, 59, 53, 60]. In the context of CPSs, attention must be paid to properly documenting the connection between hardware and software components, focusing on how and whether the functionality may change based on the characteristics of the hardware. An example is bug #6522 in ARDUPILOT where developers, after struggling to find the root cause of a bug dealing with parameter configurations being lost, realized that: “if you downgrade from Plane 3.8 to an earlier version of the plane then any changes that were made to the RC_* parameters will be lost in the earlier version and that upgrading back to 3.8 will not copy over any param changes that happened in the earlier version. This is only an issue if the user decides to downgrade”. As a consequence, it was necessary to add a warning to the release notes that “if users downgrade back to 3.7, their RC settings will likely be wrong, up to inverted servos and crashes”.
**Main Findings:** While documentation inconsistencies in CPSs are similar to those in other systems, CPS documentation must stay aligned with the characteristics and changes of software and hardware components and of CPS-specific APIs. That is, changes to CPS hardware devices, components, or sensors can trigger changes in the software documentation too.

### 3.8. Comparison with domain-specific taxonomies

Our work has similarities, but also differences, with respect to previous work on bug taxonomies for specific CPS domains, in particular the AV taxonomy by Garcia et al. [23] and the Unmanned Aerial Vehicles (UAV) taxonomy by Wang et al. [52]. Differently from previous literature, we made a distinction between bugs specific to CPS domains and bugs that may also occur in conventional software applications. This is especially relevant when defining domain-specific mutation testing approaches, aimed at capturing bugs that may not be captured using conventional mutation operators. Our results highlight that ≈ 33% of the studied bugs are CPS-specific, indicating that dedicated V&V approaches are required. Specifically, the root causes of bugs identified by Garcia et al. [23] in the automotive domain can also manifest in different CPS domains (e.g., Arduino or drones). However, our taxonomy highlights the presence of two new bug categories not mentioned in Garcia et al.’s taxonomy [23]. Specifically, we found bugs originating directly from the hardware devices (e.g., faulty sensors, hardware failures, or energy drain), but also from the network infrastructure and protocol. Furthermore, we specialized the Data category to account for bugs dealing with data persistence (i.e., *Not Persisted*), and we included the bugs inherited from third-party components in the External Interface category. For what concerns the Wang et al.
work [52], which identifies eight root causes of UAV-specific bugs together with challenges in detecting and fixing them, the main commonalities with our taxonomy are:

- The “Hardware support” category in their taxonomy is, in ours, a sub-category of the Hardware category. However, while Wang et al. found that the “hardware support” bugs in UAV systems are no different from those in traditional systems, all the bugs in our category are CPS-specific.
- The “Correction” category of Wang et al., dealing with the correction of sensor data, can potentially be mapped onto the Programming sub-category of our Algorithm category, where developers could misuse data coming from sensors without properly cleaning them.
- Their “Math” category can be mapped onto the Numerical sub-category of our Algorithm category.
- The “Parameter” category of Wang et al. can potentially be mapped onto our (more general) Interface category.

Besides that, the rest of Wang et al.’s taxonomy accounts for categories that are specific to the UAV domain (e.g., “Limit”, “Priority”, or “Consistency”). These categories of bugs have a lower level of abstraction with respect to the root causes in our taxonomy, which makes them not generalizable/reusable across generic CPS domains. For instance, bugs whose root causes are inconsistencies between hardware and software in a UAV system could not be generalized to other CPS domains. In summary, our taxonomy is more generic than Wang et al.’s, and its categories do not refer to any specific CPS domain. In other words, we designed a CPS bug taxonomy from a different perspective, which makes it broader and more reusable across different CPS domains. Indeed, our taxonomy is less specific to previously investigated domains (e.g., UAVs and automotive), but also more comprehensive, since it also covers bug types not observed in previously studied CPS domains. 4.
Implications This work can have relevant implications for developers and researchers. For what concerns developers, the elicited CPS bug taxonomy highlights specific problems that need to be carefully monitored in CPS development. These include, for instance, the need to cope with multiple hardware versions, which could cause incompatibilities. Also, it is of paramount importance to identify symptoms of hardware failures (e.g., broken sensors) so that they can be properly handled by the software. The presence of bugs originating from hardware-specific problems highlights the need for complementary software and hardware knowledge (which may or may not be available) in a project. Finally, to enable the detection and fixing of CPS bugs during the evolution of CPSs, developers should focus on properly configuring CI/CD pipelines aimed at integrating and testing different combinations of drivers/hardware devices in diversified testing scenarios. Of course, we expect that solutions for monitoring and detecting CPS bugs can vary between CPS domains. For what concerns researchers, this work triggers activities towards better testing and analysis of CPSs. First and foremost, the identified bug taxonomy can be used to derive higher-order [29] CPS-specific mutation operators. For example, bugs related to faulty-sensor data could lead to mutants that artificially change the sensor inputs towards a faulty value, or that change the source code by omitting the input correctness check. Dealing with memory-related problems would be more complicated, as they may need to be simulated by perturbing the configuration of simulators during the testing process. Interface-related mutants could be produced as higher-order mutants from already existing mutants (e.g., those from object-oriented language mutants [32]) by modifying the communication between CPS and devices, for example by altering the ordering of method calls and/or passed parameters.
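As an illustration of the faulty-sensor mutants mentioned above, the following sketch wraps a sensor-read function with an injected fault. This is a toy example, not an operator from the paper; the function names and fault modes are hypothetical.

```python
import random

def read_temperature():
    """Stand-in for a real sensor read (hypothetical)."""
    return 21.5

def faulty_sensor_mutant(read_fn, mode="stuck", stuck_value=0.0, noise=5.0):
    """Wrap a sensor-read function with an injected fault.

    mode="stuck" -> sensor always returns the same (faulty) value
    mode="noisy" -> sensor value is perturbed by uniform noise
    mode="drop"  -> sensor intermittently returns None (lost reading)
    """
    def mutated():
        value = read_fn()
        if mode == "stuck":
            return stuck_value
        if mode == "noisy":
            return value + random.uniform(-noise, noise)
        if mode == "drop":
            return None if random.random() < 0.3 else value
        return value
    return mutated

# A "stuck at -40" mutant, mimicking a frozen temperature sensor.
stuck = faulty_sensor_mutant(read_temperature, mode="stuck", stuck_value=-40.0)
print(stuck())  # -40.0
```

A test suite that still passes against such a mutant gives evidence that the software does not guard against that class of sensor fault.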
Finally, network communication mutants may include, among others, mutants aimed at perturbing the exchanged packets, similarly to what was previously proposed by Xu et al. for web service testing [54]. As explained in previous literature [29], higher-order mutants may subsume trivial operators, yet have been shown to be harder to kill than their constituents. Also, this work could foster the development of specific static analysis tools looking for CPS-specific recurring problems. Finally, complementary empirical research could investigate the difficulty (e.g., duration) of fixing CPS-specific bugs, and could develop tools guiding developers in allocating the appropriate development effort to the various types of CPS bugs. In the context of CPSs, achieving a deep knowledge of CPS bugs and their root causes would facilitate the development of better approaches and tools for reproducing them. Specifically, being able to reproduce a bug is crucial during bug triaging and debugging tasks [8, 26, 62]. Researchers have proposed several automated solutions to generate test cases reproducing the crashes of software-only systems [6, 15, 30, 38, 43], focusing on the problem of generating the program execution state that triggered a crash in the field. Fixing or addressing CPS-specific bugs and automatically assessing the correctness of the CPS behavior represent a critical challenge. Our investigation has highlighted types of CPS bugs related to the uncertainty of CPS behavior. Hence, future studies should look more closely at safety-related bugs due to this uncertainty, concerning for instance the CPS initialization or potential CPS misbehavior. This topic has been recently studied in the automotive domain [9, 10, 46], yet it requires further investigation in other CPS domains. 5. Threats to Validity Threats to construct validity concern the relationship between theory and observation.
Those are mainly due to imprecisions in the issue classification, e.g., as reported in the issue tracker [4, 25], and to the subjectivity/error-proneness of the manual classification. We mitigated both threats with the multi-stage manual classification detailed in Section 2. Another threat could be related to how our taxonomy has been obtained, especially because we started from the taxonomy proposed by Garcia et al. [23] rather than creating a new taxonomy from scratch. On the one hand, this allowed us to build on past experience, especially because of the related domain. On the other hand, there was a risk of repeating previous mistakes. This risk has been partially mitigated since, every time it was necessary to classify a bug, we determined whether it fitted an existing category or whether it was useful to create new (sub-)categories. Threats to conclusion validity concern the relationship between treatment and outcome. As described in Section 2, to achieve a reliable classification of the analyzed bugs, we performed multiple annotation rounds and then computed Cohen's $\kappa$ inter-rater agreement. Threats to internal validity may concern the cause-effect relationships between the investigated bugs (effects) and their root causes inferred from the fixes and discussions. We looked at discussions as well as fix change diffs, which could be helpful to infer the bugs' root causes. Furthermore, while, as explained in Section 2.4, we have excluded from the analysis bugs with multiple root causes, such bugs are likely part of the bugs occurring in software projects. In other words, we have focused our analysis on the occurrence of single causes, while in practice multiple root causes may co-occur to determine a bug. Further analyses of the co-occurrence of root causes may therefore be desirable. Finally, threats to external validity concern the generalization of our findings.
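The Cohen's $\kappa$ inter-rater agreement used in the annotation rounds above can be computed directly from two annotators' label lists. A minimal sketch follows; the example labels are illustrative, not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters labeling the same items."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: fraction of items labeled identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected agreement if the two raters labeled independently.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["bug", "bug", "not-bug", "bug", "not-bug", "bug"]
b = ["bug", "not-bug", "not-bug", "bug", "not-bug", "bug"]
print(round(cohens_kappa(a, b), 3))  # 0.667
```

Values above roughly 0.6 are commonly read as substantial agreement, which is why multiple annotation rounds are performed until the score is acceptable.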
Although the number of analyzed issues (1,151) is relatively large for a multiple-person manual analysis, and although they have been sampled from 14 projects belonging to different domains, by no means can they be generalized to the universe of open-source CPSs. Therefore, further replications are desirable, in both open-source and industrial contexts, to assess the generalizability of the proposed taxonomy. 6. Related Work This section discusses the literature concerning similar studies and taxonomies of bugs in different domains. Dealing with software bugs involves significant costs for software organizations [37]. For this reason, researchers have investigated the nature, root causes, and symptoms of bugs affecting different application domains. Gunawi et al. [24] presented an extensive exploration of issues and bugs associated with cloud applications. According to them, the dominant aspects of cloud-related bugs are associated with performance, security, Quality of Service (QoS), reliability, and consistency. This means that cloud application bugs have different characteristics and are typically more difficult to detect (e.g., bugs involving distributed systems) than bugs occurring in more traditional systems [24, 34, 36]. Linares et al. [50] proposed a framework for improving the mutation testing of Android applications. To achieve this goal, the authors systematically devised a taxonomy of 262 types of Android faults grouped into 14 categories by manually analyzing 2,023 software artifacts from different sources (e.g., bug reports, commits). Humbatova et al. [27] introduced a large taxonomy of faults in Deep Learning (DL) systems by manually analyzing 1,059 artifacts gathered from GitHub commits and issues of projects that use the most popular DL frameworks (i.e., TensorFlow, Keras, and PyTorch) and from related Stack Overflow posts. In follow-up work, Jahangirova et al. [28] developed a mutant taxonomy for DL. Fischer et al.
[22] studied dependency bugs in the Robot Operating System (ROS), which makes their work more specific to the robotic domain and focused on a single bug type, compared to our study. Wang et al. [52] studied Unmanned Aerial Vehicle (UAV) bugs by manually analyzing 569 bugs from two open-source GitHub repositories (i.e., PX4 and Ardupilot). In this sample, they found 168 UAV-specific bugs related to eight different types of root causes. Wang et al. also summarized challenges in detecting and fixing the UAV-specific bugs. With respect to our work, which is more general, Wang et al. performed an in-depth analysis of problems specifically occurring in UAVs. Garcia et al.'s work [23] investigated the bugs affecting two autonomous vehicle (AV) simulation tools. Specifically, they investigated the frequency, root causes, symptoms, and location (e.g., components) of bugs affecting such systems. In this paper, we considered Garcia et al.'s taxonomy as a reference starting point for designing our taxonomy. To the best of our knowledge, our study represents the first work providing a taxonomy that aims at identifying the root causes of the mistakes made by developers while developing a wide set of CPSs. Since requirement specifications of CPSs are typically expressed using signal-based temporal properties, Boufaied et al. [12] presented a taxonomy of the various types of signal-based properties and provided, for each type, a comprehensive and detailed description as well as a formalization in temporal logic. They also reported on the application of the taxonomy to classify the requirements specifications of an industrial case study in the aerospace domain. Finally, Delgado-Perez et al. [17] applied mutation testing in the nuclear power domain, showing how such a technique can be promising for evaluating test suite effectiveness and achieving good fault detection. This shows the applicability of mutation testing in CPS-related domains.
At the same time, following the results of previous studies on mutation testing effectiveness [31], as well as previous studies on domain-specific mutants [50], it may be necessary to customize the taxonomies of mutation operators. Therefore, creating a taxonomy of CPS-specific bugs represents a starting point in that direction. 7. Conclusions and Future Work This paper studied the root causes of bugs occurring in open-source Cyber-Physical Systems (CPSs). By using a hybrid card-sorting strategy [45], we have manually analyzed a statistically significant sample of 1,151 issues (of which 655 were classified as bugs) from 14 open-source projects hosted on GitHub, developed using C++ and Python, and belonging to different domains, i.e., Arduino, automotive, robotics, and drones. Our analysis reveals that ≈ 34% of the classified bugs are CPS-specific, most of which are related to hardware, networking, or interfaces. The inspection of root causes for samples of such bugs suggests different ways in which developers could improve Verification & Validation, but also support the comprehension and evolution of CPSs. Future work aims at replicating this study in industrial contexts (belonging to different domains), at investigating whether there are recurring (and, possibly, reusable) patterns for certain bug categories, and at proposing specific mutation operators and tools for CPSs. Acknowledgements We thank Remy Egloff, Jan Willi, Davide Fontanella, and Stefano Anzolut for their help in labeling the GitHub issues. We gratefully acknowledge the Horizon 2020 (EU Commission) support for the project COSMOS (DevOps for Complex Cyber-physical Systems), Project No. 957254-COSMOS. References
Supporting Informal Design with Interactive Whiteboards Nicolas Mangano¹, Thomas D. LaToza¹, Marian Petre², André van der Hoek¹ ¹ Department of Informatics University of California, Irvine {nmmangano, tlatoza, andre}@ics.uci.edu ² Centre for Research in Computing The Open University m.petre@open.ac.uk ABSTRACT Whiteboards serve an important role in supporting informal design, providing a fluid and flexible medium for collaborative design. Interactive whiteboards offer the potential for enhanced support for manipulating content, managing sketches, and distributed work, but little is known about how this support affects the practice of informal design. To understand the opportunities and challenges, we first conducted a literature review, identifying 14 behaviors that occur during informal design. We then designed an interactive whiteboard system to support all of these behaviors and deployed the system to three groups of designers. Through usage logs and interviews, we examined the effects of interactivity on whiteboard use across a wide spectrum of design behaviors, identifying ways in which interactive whiteboards support the practices used in physical whiteboards and where they enable designers to work more effectively. Author Keywords Design; sketching; interactive whiteboard; informal design ACM Classification Keywords D.2.2 [Software Engineering]: Design Tools and Techniques – Computer-aided software engineering (CASE). INTRODUCTION Interaction designers and software developers generating and refining ideas engage in informal software design, turning to the whiteboard, rather than tools for formal notations, for the flexibility and fluidity it provides [6]. Yet while designers wish to manipulate content in more sophisticated ways than adding and erasing strokes [11], physical whiteboards remain a passive medium lacking active support for design.
In response, nearly three decades of research [31, 18] has explored the design of interactive whiteboards, investigating approaches for sketch recognition [5, 21, 16, 9, 7], sketch management [31, 32, 26, 25, 13, 19, 3, 12], and distributed sketching [19, 14, 15, 29]. Yet interactive whiteboards are not widely used in practice [17]. We set out to understand the opportunities and challenges that interactive whiteboards afford in supporting informal software design. What behaviors are important for an interactive whiteboard to support to provide increased utility? How can interactive whiteboards effectively support these behaviors? How does supporting these behaviors impact the practice of informal design? What challenges remain inherent in the medium afforded by interactive whiteboards? We first conducted a review of the software design literature, identifying 14 behaviors important to support in informal design. We then designed a single unified tool – Calico – intended to preserve the fluidity and flexibility of the whiteboard while more effectively supporting the full range of sketching, navigating, and collaboration behaviors we identified. Finally, we conducted a field deployment of Calico to three groups of designers, recording usage logs and interviewing designers about their experiences. Our results illustrate the breadth and diversity of informal design at the whiteboard. Designers used Calico to create a wide range of sketches (e.g., Figure 1). The contexts in which designers worked – the nature of the design problems they faced, whether they were collocated or distributed – led to different usage of the features provided. A key benefit of interactive whiteboards was the infinite number of canvases they can provide, allowing designers to consider more alternatives and maintain a record of their design. 
Enabling designers to express relationships between canvases allowed designers to consider their design at a meta-level, providing context with which to interpret and reconstruct past designs. Our results also identified behaviors that are important to more effectively support, such as juxtaposing sketches and identifying marks in collaborative settings. Previous work presented an earlier version of Calico [23]. This paper presents a system redesigned from scratch to support not 4 but 14 distinct design behaviors (including distributed sketching) and a field deployment of its use. Other work has examined its use in the classroom [22]. RELATED WORK Decades of research into interactive whiteboards has explored a variety of approaches, including sketch recognition, sketch management, and support for distributed sketching (see Johnson et al. [18] for a review). Other work has focused on understanding the use of groupware on large displays – including for distributed sketching – in practice. Sketch recognition systems interpret a user’s strokes, translating them into a formal object. Early systems used a predefined formal notation for interpreting sketches, such as UML diagrams [5] or interface mockups [21], using the rules of the notation to provide feedback. Later systems explored user-expandable notations [16] and increased flexibility by delaying interpretation until desired [9], sometimes even while retaining a sketchy appearance [7]. Many systems have explored support for managing the many and varied sketched artifacts that are produced during meetings. Early approaches organized sketches using a filmstrip [31], hyperlinks [32], or hierarchical perspectives [26]. Later work automated particular aspects of managing sketches by automatically grouping clusters of sketches in close spatial proximity [25], shrinking sketches when moved to the periphery [13], or using metaphors such as Post-It Notes to organize and relate sketches [19].
Other systems capture and present the history of interactions with a whiteboard as a tree of storyboards [3] and allow designers to navigate a network visualization of canvases [12]. Several systems have also explored techniques for supporting synchronous and asynchronous design amongst collocated and distributed designers. Tele-Board [14] is a distributed whiteboard and Post-It Note tool that allows designers to generate sticky notes from remote locations, group them, and review whiteboards in a history viewer. Designer’s Outpost [19] helps communicate distributed designers’ gestures and body language using shadows on the whiteboard. Team Storm [15] allows designers to sketch in either private spaces or public spaces, allowing designers to interact with and provide feedback on others’ sketches. Gambit [29] allows designers to use a variety of devices together including large displays, laptops, tabletops, and phones. A few studies have investigated the impact of groupware systems for supporting design with large format displays on practice. A field deployment of Tele-Board [14] — using traditional computers rather than an interactive whiteboard — found that moving between synchronous and asynchronous modes of work allowed designers to use the system to prepare for meetings and saved time during meetings, as designers did not need to wait to sketch their ideas. Another study [17] examined the use in practice of several large-display groupware systems for informal collaboration, communication, and awareness. The study suggested the importance of supporting critical tasks, making the system’s value evident, supporting a breadth of collaboration practices, deployment in visible ways, low barriers to use, and having a core group champion the system. Our work builds on these studies, focusing specifically on the impact of interactive whiteboards on informal design.
DESIGN BEHAVIORS We reviewed the software design literature and identified 14 behaviors that occur during design at the whiteboard. How Designers Sketch Designers draw different kinds of diagrams. To explore a design problem, software designers sketch many different types of diagrams, often within the same canvas [1, 8], enabling designers to explore an issue from different angles. Designers draw what they need, and no more. Few sketches are created with extensive detail; rather, designers create sketches with the detail and notation necessary to help them reason [33] or to reinforce what they wish to communicate within the design session [33, 28]. Working with low detail enables sketches to be created quickly and modified easily, providing rapid feedback [6, 28]. Too much structure imposed by a formal notation too soon can create unconscious barriers to change, resulting in a less exploratory and broad search for solutions [34]. Designers refine and evolve sketches. The level of detail designers require grows as designers expand their ideas [27]. Refinement is not uniform across a design: portions may exist at varying levels of maturity [28]. Designers appropriate existing sketches, adding new notational elements to capture decisions as they become more committed [11]. For example, designers appropriate lists, evolving them into class diagrams by first introducing boxes to denote entities and then lines to record relationships between entities (Figure 2). Evolving sketches is unplanned, occurring in response to the needs of the design process [23]. Designers use impromptu notations. Designers work not only with formal notations (e.g., UML), but deliberately break with these to capture ideas in the moment [11]. Beyond annotations and minor deviations, designers sometimes adapt whole notations on the fly, often to describe a problem domain for which there is no standard.
**How Designers Navigate Sketches** **Designers work with different perspectives.** Designers use sketches of varying types to present multiple perspectives on a design, making details hidden in one perspective pronounced and easier to understand in another [28]. For example, in designing a user interface component, designers simultaneously work with views of the user interface and a UML model describing its data model. **Designers work with alternatives.** Designers generate sketches of competing alternatives, allowing them to manage their focus, compare alternatives, weigh their tradeoffs, and synthesize alternatives into new alternatives [24, 4]. **Designers work with sketches at different levels of abstraction.** As designs are often hierarchical, designers work with sketches spanning levels of abstraction, including sketches of user interfaces and architecture [8, 28]. **Designers perform mental simulations.** Mental simulation provides insight into the consequences of a design, allowing designers to “interrogate” their design by testing it with hypothetical scenarios and inputs, often annotating their sketches [35]. For example, while discussing the logic cars use to move through intersections, a designer may simulate the car’s path by moving his finger along a path through a map while simultaneously enumerating the logic required to implement this behavior. Mental simulations help to discover implicit assumptions and flaws in a design [28]. **Designers juxtapose sketches.** Designers often juxtapose sketches spanning perspectives, alternatives, and abstractions to reason about how a design might work, using information from one to identify inconsistencies, omissions, and mistakes in others [28]. For example, designers may use a data model and map to understand how a car object is passed between entities as it travels through an intersection. 
**Designers review their progress.** During a design session, designers sometimes pause to take a step back and consider the progress they have made and what they have yet to do [23]. For example, they may return to requirements lists, marking off those that have been addressed, enumerating those yet to be addressed, and adding additional items. **Designers retreat to previous ideas.** When designers become stuck or exhaust an alternative, they may choose to return to a previous state of the design (and its sketches) [35]. Returning to past designs may bring new insight and a matured understanding to explore the past ideas further. **How Designers Collaborate with Sketches** **Designers switch between synchronous and asynchronous work.** Design at the whiteboard often occurs synchronously, with designers working together on a single aspect of the design [10]. Designers sometimes break away to asynchronously explore an idea by themselves [14]. **Designers bring their work together.** After working asynchronously, designers may need to integrate separate ideas into a new unified design. This may involve simply combining parts of several sketches or generating a new design that borrows conceptual aspects [11]. **Designers explain their sketches to others.** When returning from independent work or when drawing on behalf of a group, designers must synchronize their mental models of the design by explaining their work to others [11]. Explanations are often supplemented by pointing or drawing on sketches, guiding attention to specific parts of a sketch. **CALICO** Designers use physical whiteboards for their fluidity and flexibility. Our key goals in designing Calico were to maintain this fluidity and flexibility — allowing designers to focus on the content of their sketch rather than the tool used to make it — while enabling users to discover interactive features that help them to design more effectively.
Building on experiences with a previous version of Calico [23], this paper presents a new system redesigned and implemented from scratch to support not 4 but 14 distinct design behaviors. To make manipulating content more fluid, we introduce selection scraps and posthoc scrap creation, make scrap interactions more discoverable through bubble menus, and introduce text and list scraps. To support more effectively working with and navigating between perspectives, alternatives, and abstractions while performing mental simulations, juxtaposing, reviewing progress, and retreating to past ideas, we introduce the cluster view. To support more effectively collaborating with sketches, we enable synchronous and asynchronous collaboration across multiple devices and introduce the fading highlighter to help designers explain sketches. In the following sections we describe the features of Calico in detail. **Sketching** As in a physical whiteboard, the most prominent feature of Calico is an open canvas, allowing designers to immediately create a stroke simply by dragging their pen. Designers can select pen color, stroke width, and pen modes and may erase strokes, undo, and redo. A central benefit of an interactive whiteboard is the interactivity it affords — the ability to move, copy, rotate, and resize. Drawing tools often enable this through modes, allowing users to toggle between drawing and selection modes. However, modes distract from the fluidity a whiteboard provides — designers can no longer stay focused on the design task at hand and must instead maintain awareness of and actively switch between modes. To minimize this distraction, Calico provides a lightweight selection and manipulation mechanism, allowing designers to select regions of content by circumscription, creating a selection scrap (Figure 3b). When a stroke is sufficiently long, a landing zone appears (Figure 3a); ending the stroke inside creates a selection scrap. 
Calico also enables scraps to be created from existing strokes, either to recover if the user has missed the landing zone or to promote previously created content into a scrap. Pressing-and-holding the pen inside a stroke that circumscribes an area triggers a dotted red circle to appear, which can be tapped to create a scrap. Scraps are inspired by Translucent Patches [20], which allows users to explicitly declare an area as a group. Scraps are movable, copy-able, deletable, rotatable, and resizable, using the bubble menu surrounding the scrap (Figure 3b). When a selection scrap loses focus, it immediately disappears and returns its content to the canvas, providing interactivity benefits without forcing content to be a persistent object. To permanently retain the scrap, users may tap either of the two scrap icons in the upper left of the bubble menu to transform it into a regular scrap (indicated with a blue background – Figure 3c), either retaining the original shape or creating a neater rectangle. Once made a regular scrap, a scrap becomes a group that is manipulatable (as described before), stackable, and connectable. For example, the ATM scrap in Figure 3d was first drawn on the canvas, then circumscribed by the stylus to create a regular scrap. Moving a scrap to a position where it is entirely overlapped by another scrap attaches it to the scrap behind it, allowing users to quickly create a stack (thereby creating hierarchically composed groups), as one would a pile of papers. Continuing the example, the Deposit, Withdrawal, and CheckBalance scraps are stacked on the Transactions scrap; moving “Transactions” moves the entire stack. Dragging a scrap off a stack ungroups it. For example, moving the scrap labeled “Deposit” from its current location to “User Interface” re-parents it to the new scrap. Scraps do not slide under other scraps; dragging a scrap implicitly moves it to the front. 
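The stacking and re-parenting behavior described above amounts to a simple tree manipulation. The following is a toy model under assumed names, not Calico's actual (Java) implementation:

```python
class Scrap:
    """Toy model of Calico's hierarchically composed scraps (names hypothetical)."""

    def __init__(self, label):
        self.label = label
        self.parent = None
        self.children = []

    def stack_on(self, other):
        # Dropping a scrap entirely inside another attaches it to the scrap
        # behind it, forming a stack; dragging it elsewhere re-parents it.
        if self.parent is not None:
            self.parent.children.remove(self)
        self.parent = other
        other.children.append(self)

transactions = Scrap("Transactions")
deposit = Scrap("Deposit")
ui = Scrap("User Interface")

deposit.stack_on(transactions)  # Deposit joins the Transactions stack
deposit.stack_on(ui)            # dragging it onto User Interface re-parents it
print([c.label for c in transactions.children], [c.label for c in ui.children])
# [] ['Deposit']
```

Because moving a parent moves the whole subtree, this model also captures why dragging the "Transactions" scrap moves its entire stack.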
Dragging the pen between scraps highlights the pen stroke, presenting the user with an option to transform the stroke into a connector through an ignorable button. As with scraps, this can also be done retroactively by pressing and holding a stroke that connects scraps. Connectors preserve the shape of the stroke, but are decorated with an arrowhead. Connectors are persistent and anchored to scraps: moving a scrap resizes the connector. List scraps enable users to organize a stack into a vertical list, whose boundaries are automatically updated (Figure 4). Promoting a stack into a list organizes the immediate children of the parent scrap into a vertical list. As with the implicit grouping of regular scraps, dragging a scrap onto a list adds it, refreshing the automatic layout. List items also gain an associated box that can be checked and unchecked. Lists can be nested to create multi-level hierarchies. Text scraps enable users to create typed content quickly from the keyboard, simply by pressing the enter key and typing. If a list scrap is selected, the text scrap is appended to it. Calico also enables scraps to be created from images. Calico provides a palette, allowing designers to save a scrap for reuse (Figure 5). Dragging a scrap from the palette to the canvas creates a copy of the scrap. The palette is global to all canvases and users, enabling scraps to be shared.

![Figure 3. Scraps allow users to manipulate content.](image)

![Figure 4. List scraps organize scraps into a vertical list.](image)

![Figure 5. Clicking on the palette icon (a) on a scrap's bubble menu adds it to the palette bar (b).](image)

**Navigating Sketches**

Calico allows designers to create and work with multiple canvases. While working in a canvas, tapping "new canvas" or "copy canvas" navigates to the new canvas and allows sketching to continue. Calico also provides a history stack with buttons to navigate forwards and backwards. Designers may choose to name their canvas with a title.
As designers create many canvases, the set of canvases may become unwieldy. To organize canvases, Calico provides a three-level hierarchy: the wall, clusters, and canvases. The wall provides a zoomable, high-level grid view of clusters, allowing designers to move between separate spaces for a project or person (Figure 6). Dragging a canvas between clusters moves it; users can create new clusters, and empty clusters are automatically deleted. Tapping a cluster invokes the cluster view (Figure 7), providing a zoomable overview of a group of canvases. Clusters automatically arrange canvases into a radial layout, ordering canvases along concentric circles. In preliminary testing, users reported that clusters provided a meta-design space and wished to organize canvases as part of their design process. Calico thus allows canvases to be manually repositioned, pinning their location. Calico enables users to construct a narrative describing the relationships between canvases through tagging. When a new canvas is created, users are prompted to tag the canvas with its relationship to the previously visited canvas (Figure 8). The tag panel is populated with a set of tags drawn from ways in which designers have been found to relate sketches, including different alternatives, perspectives, and abstractions. The user, however, may add, edit, or delete types of tags. After choosing a tag, the new canvas is linked to the previous canvas in the cluster view, with a label denoting the tag (left side of Figure 7). Repeatedly creating and linking canvases forms a graph structure in the radial layout. Calico also helps users to find canvases. Navigation history is recorded, and the most recently visited canvas is highlighted with a blue halo in the cluster view (left side of Figure 7). The breadcrumb bar at the top of the canvas and cluster views (Figure 9) lets designers directly navigate to any canvas within the hierarchy.
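The radial arrangement described above (canvases ordered along concentric circles) might be approximated by a layout like the following. This is a simplified sketch with hypothetical parameters, not Calico's actual layout algorithm; in particular, a real implementation would likely grow ring capacity with radius.

```python
import math

def radial_layout(n_canvases, ring_capacity=8, ring_spacing=100.0):
    """Sketch of a radial layout: place canvases on concentric circles,
    filling each ring before starting the next (parameters invented)."""
    positions = []
    for i in range(n_canvases):
        ring = i // ring_capacity   # which concentric circle
        slot = i % ring_capacity    # position around that circle
        radius = (ring + 1) * ring_spacing
        angle = 2 * math.pi * slot / ring_capacity
        positions.append((radius * math.cos(angle), radius * math.sin(angle)))
    return positions

pts = radial_layout(10)  # 8 canvases on the first ring, 2 on the second
```

Ordering canvases by creation index along the rings preserves the chronology of the design session, which is what lets the linked canvases read as a graph of the exploration.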
**Collaborating with Sketches**

Calico supports collaborative work across multiple devices, allowing multiple designers to work synchronously on the same canvas or asynchronously on different canvases. This allows designers working in a group to branch off to their own canvas, preventing designers from "spin[ning] their wheels" while others have the floor [55]. Calico allows users to copy or create a new canvas, work asynchronously, and later invite others to visit the new canvases. Canvases can also be shared by email or by generating a unique URL. A fading highlighter allows users to draw temporary marks immediately visible to all users currently viewing a canvas. Marks disappear after 4 seconds. This enables designers to annotate sketches during mental simulations, reviews of progress, and explanations, particularly when working in a group with multiple devices or distributed across locations.

**Implementation**

Calico is implemented as a Java application, spanning approximately 100,000 lines of code and built on the Piccolo UI toolkit for zoomable interfaces [2]. Calico uses a client-server architecture, supporting up to 20 simultaneously active users. The Calico client is portable, supporting computers connected to electronic whiteboards, laptops, and tablets. Calico is open source and freely available.

EVALUATION

To evaluate Calico and explore the opportunities and challenges in supporting informal design with interactive whiteboards, we conducted a field deployment of Calico.

**Method**

We deployed Calico to three groups. In the research group (which included an author not associated with Calico at the time), three researchers designing a software development IDE used Calico for over a year, seven months of which were included in the study period. The group was geographically distributed across two sites, but also made extensive use of Calico during a one-week collocated period.
In the interaction group, two designers at an interaction design firm used Calico over a five-day period. The interaction group used a version of Calico for most of the study period that did not contain the cluster view (including only a two-level hierarchy with a grid and canvases); we thus do not report on their use of the cluster view. In the OSS group, five software developers at a healthcare open source software company used Calico for a four-week period. The research, interaction, and OSS groups were set up with two Hitachi Starboard FXDUO77 whiteboards (adjacent in a room), one Hitachi Starboard FXDUO88, and one Hitachi Starboard FX, respectively. Each group also had access to a traditional physical whiteboard, pen-based tablets, and a server instance of Calico. During the study period, we collected usage logs of Calico, recording the complete history of designers' interactions with Calico. To analyze this data, we first used the logs to probe designers' use of Calico, examining both instances where usage aligned with the design behaviors and instances that indicated another intention. After the study period was concluded, we conducted semi-structured interviews with designers in each group, focusing on memorable design experiences with Calico, explanations of interesting behavior observed in the usage logs, obstacles or surprises designers perceived in their use of Calico, how they felt Calico impacted their design process, and perceptions of Calico's features.

¹ https://github.com/uci-sdcl/Calico

**Results**

Designers made extensive use of Calico (Table 1), with the research, interaction, and OSS groups creating a total of 79, 20, and 40 canvases, respectively. Given the choice between Calico and their traditional physical whiteboards, the interaction designers exclusively used Calico, while the research and OSS groups used both, more due to ease of access in the moment than a preference for specific tasks.
While designers used Calico over much of the study periods, use was highly concentrated in bursts of activity around meetings, where designers prepared sketches the day before, used Calico intensely during meetings, and reviewed sketches following the meeting. While much of Calico's value came from sketching in the moment, all groups emailed images of canvases to archive their sketches. The interaction and OSS groups did not arrange canvases into separate personal spaces; the research group, which used Calico over the longest period, did. In the following sections, we examine Calico's effect on each of the design behaviors, challenges designers experienced using Calico, and designers' overall impression of Calico.

**Sketching**

**Designers draw different kinds of diagrams.** Designers used Calico to create a wide variety of sketches – box and arrow diagrams, UI mockups, lists, tables, use case diagrams, source code, plots, dendrograms, flowcharts, storyboards, and timelines (Figures 1, 10, 11). Designers made use of scraps to organize and arrange content. For example, the interaction group created image scraps of people they had interviewed, organized them along themes, and drew on the diagrams to capture these ideas (Figure 10e, f). In the OSS group, designers used scraps to create box-and-arrow diagrams and user interface mockups while brainstorming the elements and appearance of a GUI (Figure 10g). They reported that depicting elements as scraps made them easier to move and resize, making them feel more like entities.

**Designers draw what they need, and no more.**
When interviewed about their sketches, designers often revealed a large disparity between their mental models of what they designed and what the sketches explicitly captured. While designers from the OSS and research groups had difficulty identifying the meaning of some sketches, they recalled the overall objective, which they considered more important than the details. These sketches were used to support activity while "in the moment". For example, the OSS group expressed most of their software architectures using only boxes and arrows (Figure 10(a, c, d)), only rarely labeling the connecting arrows. Most design occurred verbally, and designers only added the detail required to have something to point at during discussion.

| Feature | Total use (OSS group) | Total use (Rsrch group) | Days used (OSS group) | Days used (Rsrch group) |
| --- | --- | --- | --- | --- |
| Strokes | 6256 | 23915 | 45% | 58% |
| Scraps | 1636 | 14178 | 29% | 31% |
| Palette | 41 | 360 | 6% | 10% |
| Fading highlighter | 513 | 212 | 19% | 5% |
| Cluster view | 1374 | 10067 | 42% | 53% |
| Overall | 10093 | 60892 | 45% | 54% |

Table 1. Use of Calico by the OSS and research groups (data for the interaction group is not available; data for the research group includes the first 6.5 months of the 7-month study period). Each unit of activity corresponds to a user action (e.g., drawing a stroke, resizing a scrap, switching between canvases). The overall row includes all interactions with Calico (including activity types not listed).
The OSS group made extensive use of the fading highlighter, permitting designers to discuss and trace paths over diagrams while preserving their low level of detail.

**Designers varied in the level of detail they used.** When drawing similar sketches, designers used inconsistent levels of detail. For example, the interaction group sometimes labeled the axes of plots in detail and other times in very little detail. In other situations, designers created elaborate sketches that visually encoded a wide range of information. A participant in the research group reported that scraps and connectors led them to create more complex sketches, helping them address a deeper level of complexity.

**Designers refine and evolve sketches.** Designers sometimes began sketches simply, evolving them over time into more complex sketches. For example, the OSS group first created a sketch containing only handwritten names. It then evolved, as the sketched names became text scraps and connectors were added (Figure 10a). The interaction designers often began with pictures of faces, which they then categorized using visual structures. In one example (Figure 10e), they began with a single dimensional line, added categories to the line, and transformed it into a table. While they did not set out to create a table, their design process ultimately led them to create it. Scraps played an important role in this process, helping designers to organize and manipulate content as it evolved. However, designers did not make all content into regular scraps. Designers rarely made complex, handwritten structures such as plots regular scraps, as scraps were a poor fit for these structures.

**Designers use impromptu notations.** All groups created visual languages in their designs, encoding their own meaning into notations. For example, designers circled scraps (Figure 10e); used color-coded lines, underlines (Figure 10g), and boxes (Figure 10c); and drew dashed lines (Figure 11).
The meaning of the notations was often not obvious and sometimes forgotten. A designer in the OSS group reported that he could not recall the meaning afterwards, but felt that it had supported his thinking during design. Designers sometimes used the palette to record notations that could not be quickly sketched. For example, the interaction designers saved and reused images of people, and the OSS group identified and reused "important entities".

**Navigating Sketches**

**Designers work with different perspectives.** All groups shifted their focus among multiple canvases representing different perspectives on their design. For example, the interaction designers shifted focus between canvases categorizing their data using different visual structures (e.g., tables, one- and two-dimensional plots; Figure 10(e, f)). All three groups found copying canvases useful, enabling, for example, the interaction designers to use a template canvas to rapidly create new canvases to explore new perspectives on their data. The OSS group made frequent use of the cluster view to move between perspectives. When working with canvases, they created chains, providing an order that helped convey a story. This sometimes directly reflected the chronology of their exploration of the design space, while in other cases, designers inserted canvases when they returned to previous sketches and deviated to a new idea.

**Designers work with alternatives.** Designers in the OSS and research groups used multiple canvases to explore multiple alternatives. In the OSS group, the alternatives were often generated as a result of conflicting opinions during discussion, inspiring a designer to copy a canvas and generate their own interpretation. In contrast, the interaction designers did not use separate canvases to explore alternatives.
Unlike the other groups, the alternatives they considered were different organizations and interpretations of their data, which led them to negotiate alternatives verbally rather than through sketching. Finally, one designer reported that not being limited to a single space on a physical whiteboard meant that "more random ideas get thrown on there," increasing the number of alternatives they sketched. Designers used Calico's tagging feature to label canvases as alternatives. A designer felt that maintaining past alternatives, even when ultimately rejected, was beneficial in providing a record of their design process.

**Designers work with sketches at different levels of abstraction.** Designers used Calico to work with sketches at varying levels of abstraction, moving both to more and less abstract canvases. For example, the OSS group dove into the behavior of components, copying canvases and creating new canvases at a lower abstraction level. Designers first started with a more abstract sketch of an event bus connected to event listeners (Figure 10c) before considering the design of a specific "alert" event listener (Figure 10d). All groups used lists – either handwritten or list scraps – to summarize the contents of other canvases, which they referred back to while designing.

**Figure 11.** An example of fading highlighter use by the OSS group (left) and a composite of 10 min (of 30) of use (right).

**Designers perform mental simulations.** All groups reported that they mentally stepped through their sketches, both verbally in groups and on their own. To do so, the OSS group made heavy use of the fading highlighter. Displaying architectural sketches on the large electronic whiteboard during a meeting, they discussed a sketch at length, gesturing at components with their hands and using the fading highlighter from a tablet that was remotely connected to the same canvas.
In one instance, they discussed a single sketch for 30 minutes using the highlighter (Figure 11).

**Designers juxtapose sketches.** All groups juxtaposed sketches, either navigating back and forth between sketches or copying dispersed content onto a single canvas using the palette. For example, a designer in the research group copied pieces of a process flow and used an adjacent table to step through the diagram (Figure 1). In some cases, juxtaposed sketches served as a static reference in creating a new sketch; in other cases, designers evolved both in parallel.

**Designers review their progress.** All groups reported that they reviewed their progress. Most used lists (most often handwritten or as text scraps) to summarize aspects of their design, which they sometimes referenced and updated. Designers also reviewed their progress by rapidly moving back and forth between several canvases or by using the cluster view for an overview. While not sufficiently detailed to examine canvas content, the cluster view anchored discussion and allowed designers to gesture at canvases, with the linkages between canvases helping designers to recall "how the session played-out".

**Designers retreat to previous ideas.** Only designers in the multi-week, long-term design sessions (the OSS and research groups) retreated to previous ideas, reporting that they did not return to previous ideas until a later design session, at which point Calico helped to refresh their memory of their past approaches. Both reported that, since they did not feel a need to delete unused sketches, they returned to old sketches more often. The graph structure provided by the cluster view helped designers to locate old sessions and remember their meaning, with linked canvases assisting in reconstructing meaning. A designer in the research group reported:

> "Designs get very complex... you want to keep a history of what you've done, the branches that you've pruned...
If you’re designing a complex thing with stages and you’re trying to tell a story, you can say: okay we’ve tried that... If you don’t have the structure you’ll have to create it somewhere else. [You save time] if it’s already here...” **Collaborating with Sketches** **Designers switch between synchronous and asynchronous work.** Designers in the research and OSS groups used Calico across multiple devices. For the OSS group, this led to a more informal setting in which members spontaneously broke into small groups in meetings, handing tablets back-and-forth, sketching over the diagrams, and displaying their annotations on the electronic whiteboard. With multiple tablets, multiple team members could talk simultaneously without a single arbiter at the whiteboard blocking content production, an issue in whiteboard use [30]. In the distributed research group, this enabled remote participants to be more active by sketching ideas. In contrast, the interaction designers were collocated and had an established culture of working in pairs, leading them not to break into groups. The OSS group reported working asynchronously at least once every session and felt that it was an important benefit: > “The fact that someone can work with their own tablet or computer... is something really powerful... Especially when someone is already at the whiteboard discussing something and you want to bring in an alternative perspective but you need to wait until they’re done.” **Designers bring their work together.** Designers rarely did this, as the interaction group did not work asynchronously and the research group did not combine their work. However, the OSS group twice combined work produced asynchronously, creating a new canvas, linking it to the previous canvases with tags, and summarizing their work. **Designers explain their sketches to others.** All groups explained their sketches to one another but varied in the situations in which they did so. 
The interaction designers worked exclusively synchronously, explaining designs only when a designer challenged decisions. The OSS group sometimes worked asynchronously and used explanations when returning to synchronous work. The research group worked more independently and explained days of work to other team members. In most cases, designers explained their sketches by pointing, gesturing in the air, or simply verbally, with the fading highlighter sometimes assisting.

**Challenges using Calico**

Our study revealed a number of weaknesses in Calico, ranging from usability issues to challenges inherent to interactive whiteboards. The interaction designers reported that rapidly rearranging many scraps was not well supported, as the gesture of moving scraps (click and hold) could be slow. Due to the cluster view's layout approach, it often zoomed out far to show all canvases, making it difficult or impossible to read the content on individual canvases. This made juxtaposing sketches more challenging, forcing designers to explicitly copy canvases using the palette or to rapidly jump between canvases. It also made simply navigating between canvases using the cluster view more challenging. Designers also wanted the ability to more easily augment the set of tags, to, for example, declare which alternative was chosen. While the fading highlighter played an important role in several situations, designers often felt that they forgot to use it "in the heat of the moment". Moreover, it was sometimes confusing which designer was drawing – designers wished to see a name associated with highlights. While the cluster view depicted the current canvas of each device, the designers still felt slowed down when moving between canvases with multiple participants, requiring that they announce what canvas they were moving to. Nearly all groups reported that the large electronic whiteboards diminished the quality of their handwriting, forcing them to write slower or larger, write with a tablet, or enter text using a keyboard. The interaction designers found the space available too small, reporting that they were "blocked by the physical limitations of the [electronic] board."

| Empirical Result | Design Recommendation |
| --- | --- |
| Designers simulate and discuss scenarios very frequently. | Enable annotating sketches with multiple scenarios. |
| Interactive whiteboards diminish handwriting quality. | Enable alternative text input (e.g., speech to text or text recognition). |
| Designers work simultaneously with several canvases. | Enable multiple canvases to be legibly viewed simultaneously. |
| While separating sketches across canvases has important benefits, multiple canvases are sometimes parts of a single sketch. | Enable designers to expand canvases when necessary. |
| Designers use impromptu notations whose meaning is forgotten when sketches are reviewed. | Enable designers to reconstruct meaning by recording and replaying audio from design sessions. |
| Determining the authorship of content is challenging. | Provide authorship cues as content is created. |
| Designers work synchronously and asynchronously, moving together between canvases and working on separate canvases. | Enable designers to temporarily subscribe to a group focus. |

Table 2. Empirical results on informal design with interactive whiteboards and recommendations for design.

**Overall Impressions**

The research group and OSS group both felt that, on balance, the benefits of using Calico outweighed its difficulties and wished to continue using Calico in the future. The research group felt that Calico helped support their meetings. Prior to using Calico, the group used physical whiteboards and emailed pictures of the whiteboard to the remotely located team member. They preferred Calico over a formal diagramming tool, as they wished to maintain informality and the ability to freely sketch.
The OSS group reported that they did not feel any loss of expressive control in using Calico in comparison to the whiteboard, and reported that they normally would have performed many of the same activities on physical whiteboards in their meeting spaces. The interaction designers reported that they would not continue to use Calico, as it did not match their needs. They wished to have infinitely sized canvases – which Calico did not provide – and felt trapped by the limited space. Further, performance was slowed by using a large number of images on a single canvas, making Calico less responsive.

DISCUSSION

Through a review of the software design literature, we identified 14 behaviors that characterize informal design at the whiteboard and designed an interactive whiteboard system – Calico – to support these behaviors. Through a deployment of Calico to three groups of designers, we examined how supporting these behaviors impacts the practice of informal design. We found that, by supporting these behaviors, interactive whiteboards can help designers to more effectively manipulate content, work with groups and relationships amongst sketches, and collaboratively design synchronously and asynchronously. Our field deployment revealed several challenges in supporting informal design, suggesting several design recommendations beyond supporting the design behaviors (Table 2). For example, designers constantly use general-purpose sketches to simulate and discuss scenarios, annotating and tracing paths over sketches. This might be more effectively supported by allowing designers to use and reference multiple scenarios on top of general-purpose sketches. As another example, diminished handwriting quality remains an important issue, suggesting the need to consider alternative mechanisms for text entry such as speech to text.
Together, the design behaviors and design recommendations provide guidance on how informal design can be effectively supported with interactive whiteboards. ACKNOWLEDGEMENTS We thank all of the designers who participated in our study. This research was funded in part by the National Science Foundation under grants CCF-1118052 and IIS-1111446. REFERENCES
olmocr_science_pdfs
2024-11-27
2024-11-27
89d778609076b6fb2fbeac5f620fad742c026b06
Standard Description: This summary covers roughly the same material as lecture and section. It can help to read about the material in a narrative style and to have the material for an entire unit of the course in a single document, especially when reviewing the material later. Please report errors in these notes, even typos. This summary is not a sufficient substitute for attending class, reading the associated code, etc.

Ruby Logistics

The course website provides installation and basic usage instructions for Ruby and its REPL (called irb), so that information is not repeated here. Note that for consistency we will require Ruby version 2.x.y (for any x and y), although this is for homework purposes – the concepts we will discuss do not depend on an exact version, naturally. There is a great amount of free documentation for Ruby at http://ruby-doc.org and http://www.ruby-lang.org/en/documentation/. We also recommend Programming Ruby 1.9 & 2.0, The Pragmatic Programmers’ Guide, although this book is not free. Because the online documentation is excellent, the other course materials may not describe in detail every language feature used in the lectures and homeworks, although it is also not our goal to make you hunt for things on purpose. In general, learning new language features and libraries is an important skill after some initial background to point you in the right direction.

Ruby Features Most Interesting for a PL Course

Ruby is a large, modern programming language with various features that make it popular. Some of these features are useful for a course on programming-language features and semantics, whereas others are not useful for our purposes even though they may be very useful in day-to-day programming. Our focus will be on object-oriented programming, dynamic typing, blocks (which are almost closures), and mixins.
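As a tiny preview of the first three of these features, here is a sketch (the class name `Greeter` is made up for illustration; every construct used here is explained properly in later sections):

```ruby
# A hypothetical class, just to show the flavor of the language.
class Greeter
  def initialize name
    @name = name          # an instance variable, created by assignment
  end
  def greet
    "Hello, #{@name}!"    # a method implicitly returns its last expression
  end
end

g = Greeter.new("world")   # dynamic typing: no declared types anywhere
puts g.greet               # object-oriented: send the greet message to g
3.times { puts g.greet }   # a block passed to the times method of an integer
```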
We briefly describe these features and some other things that distinguish Ruby here — if you have not seen an object-oriented programming language, then some of this overview will not make sense until after learning more Ruby.

- Ruby is a pure object-oriented language, which means all values in the language are objects. In Java, as an example, some values that are not objects are null, 13, true, and 4.0. In Ruby, every expression evaluates to an object.
- Ruby is class-based: Every object is an instance of a class. An object’s class determines what methods an object has. (All code is in methods, which are like functions in the sense that they take arguments and return results.) You call a method “on” an object, e.g., obj.m(3,4) evaluates the variable obj to an object and calls its m method with arguments 3 and 4. Not all object-oriented languages are class-based; see, for example, JavaScript.
- Ruby has mixins: The next course-unit will describe mixins, which strike a reasonable compromise between multiple inheritance (like in C++) and interfaces (like in Java). Every Ruby class has one superclass, but it can include any number of mixins, which, unlike interfaces, can define methods (not just require their existence).
- Ruby is dynamically typed: Just as Racket allowed calling any function with any argument, Ruby allows calling any method on any object with any arguments. If the receiver (the object on which we call the method) does not define the method, we get a dynamic error.
- Ruby has many dynamic features: In addition to dynamic typing, Ruby allows instance variables (called fields in many object-oriented languages) to be added and removed from objects and it allows methods to be added and removed from classes while a program executes.
- Ruby has convenient reflection: Various built-in methods make it easy to discover at run-time properties about objects.
As examples, every object has a method class that returns the object’s class, and a method methods that returns an array of the object’s methods. • Ruby has blocks and closures: Blocks are almost like closures and are used throughout Ruby libraries for convenient higher-order programming. Indeed, it is rare in Ruby to use an explicit loop since collection classes like Array define so many useful iterators. Ruby also has fully-powerful closures for when you need them. • Ruby is a scripting language: There is no precise definition of what makes a language a scripting language. It means the language is engineered toward making it easy to write short programs, providing convenient access to manipulating files and strings (topics we will not discuss), and having less concern for performance. Like many scripting languages, Ruby does not require that you declare variables before using them and there are often many ways to say the same thing. • Ruby is popular for web applications: The Ruby on Rails framework is a popular choice for developing the server side of modern web-sites. Recall that, taken together, ML, Racket, and Ruby cover three of the four combinations of functional vs. object-oriented and statically vs. dynamically typed. Our focus will be on Ruby’s object-oriented nature, not on its benefits as a scripting language. We also will not discuss at all its support for building web applications, which is a main reason it is currently so popular. As an object-oriented language, Ruby shares much with Smalltalk, a language that has basically not changed since 1980. Ruby does have some nice additions, such as mixins. Ruby is also a large language with a “why not” attitude, especially with regard to syntax. ML and Racket (and Smalltalk) adhere rather strictly to certain traditional programming-language principles, such as defining a small language with powerful features that programmers can then use to build large libraries. Ruby often takes the opposite view. 
For example, there are many different ways to write an if-expression.

The Rules of Class-Based OOP

Before learning the syntax and semantics of particular Ruby constructs, it is helpful to enumerate the “rules” that describe languages like Ruby and Smalltalk. Everything in Ruby is described in terms of object-oriented programming, which we abbreviate OOP, as follows:

1. All values (as usual, the result of evaluating expressions) are references to objects.
2. Given an object, code “communicates with it” by calling its methods. A synonym for calling a method is sending a message. (In processing such a message, an object is likely to send other messages to other objects, leading to arbitrarily sophisticated computations.)
3. Each object has its own private state. Only an object’s methods can directly access or update this state.
4. Every object is an instance of a class.
5. An object’s class determines the object’s behavior. The class contains method definitions that dictate how an object handles method calls it receives.

While these rules are mostly true in other OOP languages like Java or C#, Ruby makes a more complete commitment to them. For example, in Java and C#, some values like numbers are not objects (violating rule 1) and there are ways to make object state publicly visible (violating rule 3).

Objects, Classes, Methods, Variables, Etc.

(See also the example programs posted with the lecture materials, not all of which are repeated here.)

Class and method definitions

Since every object has a class, we need to define classes and then create instances of them (an object of class C is an instance of C). (Ruby also predefines many classes in its language and standard library.) The basic syntax (we will add features as we go) for creating a class Foo with methods m1, m2, ... mn can be:

```ruby
class Foo
  def m1
    ...
  end
  def m2 (x,y)
    ...
  end
  ...
  def mn z
    ...
  end
end
```

Class names must be capitalized. They include method definitions.
A method can take any number of arguments, including 0, and we have a variable for each argument. In the example above, m1 takes 0 arguments, m2 takes two arguments, and mn takes 1 argument. Not shown here are method bodies. Like ML and Racket functions, a method implicitly returns its last expression. Like Java/C#/C++, you can use an explicit return statement to return immediately when helpful. (It is bad style to have a return at the end of your method since it can be implicit there.) Method arguments can have defaults in which case a caller can pass fewer actual arguments and the remaining ones are filled in with defaults. If a method argument has a default, then all arguments to its right must also have a default. An example is: def myMethod (x,y,z=0,w="hi") ... end Calling methods The method call `e0.m(e1, ..., en)` evaluates `e0`, `e1`, ..., `en` to objects. It then calls the method `m` in the result of `e0` (as determined by the class of the result of `e0`), passing the results of `e1`, ..., `en` as arguments. As for syntax, the parentheses are optional. In particular, a zero-argument call is usually written `e0.m`, though `e0.m()` also works. To call another method on the same object as the currently executing method, you can write `self.m(...)` or just `m(...)`. (Java/C#/C++ work the same way except they use the keyword `this` instead of `self`.) In OOP, another common name for a method call is a message send. So we can say `e0.m e1` sends the result of `e0` the message `m` with the argument that is the result of `e1`. This terminology is “more object-oriented” — as a client, we do not care how the receiver (of the message) is implemented (e.g., with a method named `m`) as long as it can handle the message. As general terminology, in the call `e0.m args`, we call the result of evaluating `e0` the receiver (the object receiving the message). **Instance variables** An object has a class, which defines its methods. 
It also has instance variables, which hold values (i.e., objects). Many languages (e.g., Java) use the term fields instead of instance variables for the same concept. Unlike Java/C#/.NET, our class definition does not indicate what instance variables an instance of the class will have. To add an instance variable to an object, you just assign to it: if the instance variable does not already exist, it is created. All instance variables start with an `@`, e.g., `@foo`, to distinguish them from variables local to a method. Each object has its own instance variables. Instance variables are mutable. An expression (in a method body) can read an instance variable with an expression like `@foo` and write an instance variable with an expression `@foo = newValue`. Instance variables are private to an object. There is no way to directly access an instance variable of any other object. So `@foo` refers to the `@foo` instance variable of the current object, i.e., `self.@foo` except `self.@foo` is not actually legal syntax. Ruby also has class variables (which are like Java’s static fields). They are written `@@foo`. Class variables are not private to an object. Rather, they are shared by all instances of the class, but are still not directly accessible from objects of different classes. **Constructing an object** To create a new instance of class `Foo`, you write `Foo.new(...)` where `(...)` holds some number of arguments (where, as with all method calls, the parentheses are optional and when there are zero or one arguments it is preferred to omit them). The call to `Foo.new` will create a new instance of `Foo` and then, before `Foo.new` returns, call the new object’s `initialize` method with all the arguments passed to `Foo.new`. That is, the method `initialize` is special and serves the same role as constructors in other object-oriented languages. Typical behavior for `initialize` is to create and initialize instance variables. 
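A sketch of the whole pattern so far, using a hypothetical Point class (also showing a default argument):

```ruby
class Point
  def initialize(x = 0, y = 0)   # defaults let callers pass fewer arguments
    @x = x                        # assignment creates the instance variables
    @y = y
  end
  def distance_from_origin
    Math.sqrt(@x * @x + @y * @y)  # methods can read the object's own @x and @y
  end
end

p1 = Point.new(3, 4)   # Point.new calls initialize with 3 and 4
p2 = Point.new         # both instance variables default to 0
puts p1.distance_from_origin   # 5.0
puts p2.distance_from_origin   # 0.0
```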
In fact, the normal approach is for `initialize` always to create the same instance variables and for no other methods in the class to create instance variables. But Ruby does not require this and it may be useful on occasion to violate these conventions. Therefore, different instances of a class can have different instance variables. **Expressions and Local Variables** Most expressions in Ruby are actually method calls. Even `e1 + e2` is just syntactic sugar for `e1.+ e2`, i.e., call the `+` method on the result of `e1` with the result of `e2`. Another example is `puts e`, which prints the result of `e` (after calling its `to_s` method to convert it to a string) and then a newline. It turns out `puts` is a method in all objects (it is defined in class `Object` and all classes are subclasses of `Object` — we discuss subclasses later), so `puts e` is just `self.puts e`. Not every expression is a method call. The most common other expression is some form of conditional. There are various ways to write conditionals; see the example code posted with the lecture materials. As discussed below, loop expressions are rare in Ruby code. Like instance variables, variables local to a method do not have to be declared: The first time you assign to \texttt{x} in a method will create the variable. The scope of the variable is the entire method body. It is a run-time error to use a local variable that has not yet been defined. (In contrast, it is not a run-time error to use an instance variable that has not yet been defined. Instead you get back the \texttt{nil} object, which is discussed more below.) \textit{Class Constants and Class Methods} A class constant is a lot like a class variable (see above) except that (1) it starts with a capital letter instead of \texttt{@@}, (2) you should not mutate it, and (3) it is publicly visible. Outside of an instance of class \texttt{C}, you can access a constant \texttt{Foo} of \texttt{C} with the syntax \texttt{C::Foo}. 
An example is \texttt{Math::PI}.\footnote{Actually, \texttt{Math} is a module, not a class, so this is not technically an example, but modules can also have constants.} A class method is like an ordinary method (called an instance method to distinguish from class methods) except (1) it does not have access to any of the instance variables or instance methods of an instance of the class and (2) you can call it from outside the class \texttt{C} where it is defined with \texttt{C.method_name args}. There are various ways to define a class method; the most common is the somewhat hard-to-justify syntax: \begin{verbatim} def self.method_name args ... end \end{verbatim} Class methods are called static methods in Java and C#. \textbf{Visibility and Getters/Setters} As mentioned above, instance variables are private to an object: only method calls with \textit{that object} as the receiver can read or write the fields. As a result, the syntax is \texttt{@foo} and the self-object is implied. Notice even other instances of the same class cannot access the instance variables. This is quite object-oriented: you can interact with another object only by sending it messages. Methods can have different \textit{visibilities}. The default is \texttt{public}, which means any object can call the method. There is also \texttt{private}, which, like with instance variables, allows only the object itself to call the method (from other methods in the object). In-between is \texttt{protected}: A protected method can be called by any object that is an instance of the same class or any subclass of the class. There are various ways to specify the visibility of a method. Perhaps the simplest is within the class definition you can put \texttt{public}, \texttt{private}, or \texttt{protected} between method definitions. Reading top-down, the most recent visibility specified holds for all methods until the next visibility is specified. 
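This top-down reading can be sketched like this (the Account class and its methods are invented for illustration):

```ruby
class Account
  def initialize          # methods are public until a visibility keyword appears
    @cents = 250
  end

  def balance_in_dollars
    cents / 100.0         # fine: calls the private helper on self implicitly
  end

  private                 # everything from here down is private

  def cents
    @cents
  end
end

a = Account.new
puts a.balance_in_dollars   # 2.5
# a.cents would raise NoMethodError: private methods cannot have an explicit receiver
```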
There is an implicit \texttt{public} before the first method in the class. To make the contents of an instance variable available and/or mutable, we can easily define getter and setter methods, which by convention we can give the same name as the instance variable. For example:

\begin{verbatim}
def foo
  @foo
end
def foo= x
  @foo = x
end
\end{verbatim}

If these methods are public, now any code can access the instance variable @foo indirectly, by calling foo or foo=. It sometimes makes sense to instead make these methods protected if only other objects of the same class (or subclasses) should have access to the instance variables. As a cute piece of syntactic sugar, when calling a method that ends in a = character, you can have spaces before the =. Hence you can write e.foo = bar instead of e.foo= bar. The advantage of the getter/setter approach is it remains an implementation detail that these methods are implemented as getting and setting an instance variable. We, or a subclass implementer, could change this decision later without clients knowing. We can also omit the setter to ensure an instance variable is not mutated except perhaps by a method of the object. As an example of a “setter method” that is not actually a setter method, a class could define:

```ruby
def celsius_temp= x
  @kelvin_temp = x + 273.15
end
```

A client would likely imagine the class has a @celsius_temp instance variable, but in fact it (presumably) does not. This is a good abstraction that allows the implementation to change. Because getter and setter methods are so common, there is shorter syntax for defining them. For example, to define getters for instance variables @x, @y, and @z and a setter for @x, the class definition can just include:

```ruby
attr_reader :y, :z   # defines getters
attr_accessor :x     # defines getters and setters
```

A final syntactic detail: If a method m is private, you can only call it as m or m(args). A call like x.m or x.m(args) would break visibility rules.
A call like self.m or self.m(args) would not break visibility, but still is not allowed.

### Some Syntax, Semantics, and Scoping To Get Used To

Ruby has a fair number of quirks that are often convenient for quickly writing useful programs but may take some getting used to. Here are some examples; you will surely discover more.

- There are several forms of conditional expressions, including e1 if e2 (all on one line), which evaluates e1 only if e2 is true (i.e., it reads right-to-left).
- Newlines are often significant. For example, you can write

```ruby
if e1
  e2
else
  e3
end
```

But if you want to put this all on one line, then you need to write if e1 then e2 else e3 end. Note, however, indentation is never significant (only a matter of style).
- Conditionals can operate on any object and treat every object as “true” with two exceptions: false and nil.
- As discussed above, you can define a method with a name that ends in =, for example:

```ruby
def foo= x
  @blah = x * 2
end
```

As expected, you can write `e.foo=(17)` to change `e`'s `@blah` instance variable to be 34. Better yet, you can adjust the parentheses and spacing to write `e.foo = 17`. This is just syntactic sugar. It “feels” like an assignment statement, but it is really a method call. Stylistically you do this for methods that mutate an object’s state in some “simple” way (like setting a field).
- Where you write `this` in Java/C#/C++, you write `self` in Ruby.
- Remember variables (local, instance, or class) get automatically created by assignment, so if you mis-spell a variable in an assignment, you end up just creating a different variable.

**Everything is an Object**

Everything is an object, including numbers, booleans, and `nil` (which is often used like `null` in Java). For example, `-42.abs` evaluates to 42 because the `Fixnum` class defines the method `abs` to compute the absolute value and `-42` is an instance of `Fixnum`.
(Of course, this is a silly expression, but `x.abs` where `x` currently holds `-42` is reasonable.) All objects have a `nil?` method, which the class of `nil` defines to return `true` but other classes define to return `false`. Like in ML and Racket, every expression produces a result, but when no particular result makes sense, `nil` is preferred style (much like ML’s `()` and Racket’s void-object). That said, it is often convenient for methods to return `self` so that subsequent method calls to the same object can be put together. For example, if the `foo` method returns `self`, then you can write `x.foo(14).bar("hi")` instead of

```ruby
x.foo(14)
x.bar("hi")
```

There are many methods to support *reflection* — learning about objects and their definition during program execution — that are defined for all objects. For example, the method `methods` returns an array of the names of the methods defined on an object and the method `class` returns the class of the object.\(^2\) Such reflection is occasionally useful in writing flexible code. It is also useful in the REPL or for debugging.

**The Top-Level**

You can define methods, variables, etc. outside of an explicit class definition. The methods are implicitly added to class `Object`, which makes them available from within any object’s methods. Hence all methods are really part of some class.\(^3\) Top-level expressions are evaluated in order when the program runs. So instead of Ruby specifying a main class and method with a special name (like `main`), you can just create an object and call a method on it at top-level.

\(^2\)This class is itself just another object. Yes, even classes are objects.

\(^3\)This is not entirely true because modules are not classes.

Class Definitions are Dynamic

A Ruby program (or a user of the REPL) can change class definitions while a Ruby program is running. Naturally this affects all users of the class.
Perhaps surprisingly, it even affects instances of the class that have already been created. That is, if you create an instance of `Foo` and then add or delete methods in `Foo`, then the already-created object “sees” the changes to its behavior. After all, every object has a class and the (current) class (definition) defines an object’s behavior. This is usually dubious style because it breaks abstractions, but it leads to a simpler language definition: defining classes and changing their definitions is just a run-time operation like everything else. It can certainly break programs: If I change or delete the `+` method on numbers, I would not expect many programs to keep working correctly. It can be useful to add methods to existing classes, especially if the designer of the class did not think of a useful helper method. The syntax to add or change methods is particularly simple: Just give a class definition including method definitions for a class that is already defined. The method definitions either replace definitions for methods previously defined (with the same name method name) or are added to the class (if no method with the name previously existed). Duck Typing Duck typing refers to the expression, “If it walks like a duck and quacks like a duck, then it’s a duck” though a better conclusion might be, “then there is no reason to concern yourself with the possibility that it might not be a duck.” In Ruby, this refers to the idea that the class of an object (e.g., “Duck”) passed to a method is not important so long as the object can respond to all the messages it is expected to (e.g., “walk to x” or “quack now”). For example, consider this method: ```ruby def mirror_update pt pt.x = pt.x * -1 end ``` It is natural to view this as a method that must take an instance of a particular class `Point` (not shown here) since it uses methods `x` and `x=` defined in it. 
And the `x` getter must return a number since the result of `pt.x` is sent the `*` message with `-1` for multiplication. But this method is more generally useful. It is not necessary for `pt` to be an instance of `Point` provided it has methods `x` and `x=`. Moreover, the `x` and `x=` methods need not be a getter and setter for an instance variable `@x`. Even more generally, we do not need the `x` method to return a number. It just has to return some object that can respond to the `*` message with argument `-1`. Duck typing can make code more reusable, allowing clients to make "fake ducks" and still use your code. In Ruby, duck typing basically "comes for free" as long as you do not explicitly check that arguments are instances of particular classes using methods like `instance_of?` or `is_a?` (discussed below when we introduce subclassing). Duck typing has disadvantages. The most lenient specification of how to use a method ends up describing the whole implementation of a method, in particular what messages it sends to what objects. If our specification reveals all that, then almost no variant of the implementation will be equivalent. For example, if we know i is a number (and ignoring clients redefining methods in the classes for numbers), then we can replace `i+i` with `i*2` or `2*i`. But if we just assume `i` can receive the `+` message with itself as an argument, then we cannot do these replacements since `i` may not have a `*` method (breaking `i*2`) or it may not be the sort of object that `2` expects as an argument to `*` (breaking `2*i`).

**Arrays**

The `Array` class is very commonly used in Ruby programs and there is special syntax that is often used with it. Instances of `Array` have all the uses that arrays in other programming languages have — and much, much more. Compared to arrays in Java/C#/C/etc., they are much more flexible and dynamic with fewer operations being errors.
The trade-off is they can be less efficient, but this is usually not a concern for convenient programming in Ruby. In short, all Ruby programmers are familiar with Ruby arrays because they are the standard choice for any sort of collection of objects. In general, an array is a mapping from numbers (the indices) to objects. The syntax `[e1,e2,e3,e4]` creates a new array with four objects in it: The result of `e1` is in index 0, the result of `e2` is in index 1, and so on. (Notice the indexing starts at 0.) There are other ways to create arrays. For example, `Array.new(x)` creates an array of length `x` with each index initially mapped to `nil`. We can also pass blocks (see below for what blocks actually are) to the `Array.new` method to initialize array elements. For example, `Array.new(x) { 0 }` creates an array of length `x` with all elements initialized to `0` and `Array.new(5) {|i| -i }` creates the array `[0,-1,-2,-3,-4]`. The syntax for getting and setting array elements is similar to many other programming languages: The expression `a[i]` gets the element in index `i` of the array referred to by `a` and `a[i] = e` sets the same array index. As you might suspect in Ruby, we are really just calling methods on the `Array` class when we use this syntax. Here are some simple ways Ruby arrays are more dynamic and less error-causing than you might expect compared to other programming languages:

- As usual in a dynamically typed language, an array can hold objects that are instances of different classes, for example `[14, "hi", false, 34]`.
- Negative array indices are interpreted from the end of the array. So `a[-1]` retrieves the last element in the array `a`, `a[-2]` retrieves the second-to-last element, etc.
- There are no array-bounds errors. For the expression `a[i]`, if `a` holds fewer than `i+1` objects, then the result will just be `nil`.
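All of the points above are easy to check directly, e.g., in irb (the array contents here are arbitrary):

```ruby
a = [14, "hi", false, 34]    # elements of different classes in one array
puts a[1]                    # hi
puts a[-1]                   # 34 -- negative indices count from the end
p a[99]                      # nil -- an out-of-bounds read is not an error
p Array.new(5) { |i| -i }    # [0, -1, -2, -3, -4]
```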
Setting such an index is even more interesting: For `a[i]=e`, if `a` holds fewer than `i+1` objects, then the array will grow dynamically to hold `i+1` objects, the last of which will be the result of `e`, with the right number of `nil` objects between the old last element and the new last element. - There are many methods and operations defined in the standard library for arrays. If the operation you need to perform on an array is at all general-purpose, peruse the documentation since it is surely already provided. As two examples, the `+` operator is defined on arrays to mean concatenation (a new array where all of the left-operand elements precede all of the right-operand elements), and the `|` operator is like the `+` operator except it removes all duplicate elements from the result. In addition to all the conventional uses for arrays, Ruby arrays are also often used where in other languages we would use other constructs for tuples, stacks, or queues. Tuples are the most straightforward usage. After all, given dynamic typing and less concern for efficiency, there is little reason to have separate constructs for tuples and arrays. For example, for a triple, just use a 3-element array. For stacks, the \texttt{Array} class defines convenient methods \texttt{push} and \texttt{pop}. The former takes an argument, grows the array by one index, and places the argument at the new last index. The latter shrinks the array by one index and returns the element that was at the old last index. Together, this is exactly the last-in-first-out behavior that defines the behavior of a stack. (How this is implemented in terms of actually growing and shrinking the underlying storage for the elements is of concern only in the implementation of \texttt{Array}.) For queues, we can use \texttt{push} to add elements as just described and use the \texttt{shift} method to dequeue elements. 
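Both idioms in a quick sketch (the element values are arbitrary):

```ruby
stack = []
stack.push 7
stack.push 11
p stack.pop     # 11 -- last in, first out
p stack         # [7]

queue = []
queue.push "a"
queue.push "b"
p queue.shift   # "a" -- first in, first out
p queue         # ["b"]
```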
The \texttt{shift} method returns the object at index 0 of the array, removes it from the array, and shifts all the other elements down one index, i.e., the object (if any) previously at index 1 is now at index 0, etc. Though not needed for simple queues, \texttt{Array} also has an \texttt{unshift} method that is like \texttt{push} except it puts the new object at index 0 and moves all other objects up by 1 index (growing the array size by 1). Arrays are even more flexible than described here. For example, there are operations to replace any sequence of array elements with the elements of any other array, even if the other array has a different length than the sequence being replaced (hence changing the length of the array). Overall, this flexible treatment of array sizes (growing and shrinking) is different from arrays in some other programming languages, but it is consistent with treating arrays as maps from numeric indices to objects. What we have not shown so far are operations that perform some computation using all the contents of an array, such as mapping over the elements to make a new array, or computing a sum of them. That is because the Ruby idioms for such computations use \textit{blocks}, which we introduce next.

### Passing Blocks

While Ruby has while loops and for loops not unlike Java, most Ruby code does not use them. Instead, many classes have methods that take \textit{blocks}. These blocks are \textit{almost} closures. For example, integers have a \texttt{times} method that takes a block and executes it the number of times you would imagine. For example,

\begin{verbatim}
x.times { puts "hi" }
\end{verbatim}

prints "hi" 3 times if x is bound to 3 in the environment. Blocks are closures in the sense that they can refer to variables in scope where the block is defined. For example, after this program executes, \(y\) is bound to 10:

\begin{verbatim}
y = 7
[4,6,8].each { y += 1 }
\end{verbatim}

Here \([4,6,8]\) is an array with 3 elements.
Arrays have a method `each` that takes a block and executes it once for each element. Typically, however, we want the block to be passed each array element. We do that like this, for example to sum an array's elements and print out the running sum at each point:

```ruby
sum = 0
[4,6,8].each { |x|
  sum += x
  puts sum
}
```

Blocks, surprisingly, are not objects. You cannot pass them as "regular" arguments to a method. Rather, any method can be passed either 0 or 1 blocks, separate from the other arguments. As seen in the examples above, the block is just put to the right of the method call, after any other "regular" arguments. For example, the `inject` method is like the `fold` function we studied in ML and we can pass it an initial accumulator as a regular argument:

```ruby
sum = [4,6,8].inject(0) { |acc,elt| acc + elt }
```

(It turns out the initial accumulator is optional. If omitted, the method will use the array element at index 0 as the initial accumulator.)

In addition to the braces syntax shown here, you can write a block using `do` instead of `{` and `end` instead of `}`. This is generally considered better style for blocks more than one line long.

When calling a method that takes a block, you should know how many arguments will be passed to the block when it is called. For the `each` method in `Array`, the answer is 1, but as the first example showed, you can ignore arguments if you have no need for them by omitting the `|...|`. Many collections, including arrays, have a variety of block-taking methods that look very familiar to functional programmers, including `map`. As another example, the `select` method is like the function we called `filter`. Other useful iterators include `any?` (returns true if the block returns true for any element of the collection), `all?` (returns true if the block returns true for every element of the collection), and several more.
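The iterators just mentioned can be combined like this (a small sketch; all are standard `Array` methods):

```ruby
a = [4, 6, 8]
doubled = a.map { |x| x * 2 }      # [8, 12, 16]
bigs    = a.select { |x| x > 5 }   # [6, 8]
any_odd = a.any? { |x| x.odd? }    # false, all elements are even
all_pos = a.all? { |x| x > 0 }     # true
```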
### Using Blocks

While many uses of blocks involve calling methods in the standard library, you can also define your own methods that take blocks. (The large standard library just makes it somewhat rare to need to do this.) You can pass a block to any method. The method body calls the block using the `yield` keyword. For example, this code prints "hi" 3 times:

```ruby
def foo x
  if x
    yield
  else
    yield
    yield
  end
end
foo(true) { puts "hi" }
foo(false) { puts "hi" }
```

To pass arguments to a block, you put the arguments after the `yield`, e.g., `yield 7` or `yield(8,"str")`. Using this approach, the fact that a method may expect a block is implicit: it is just that its body might use `yield`. An error will result if `yield` is used and no block was passed. The behavior when the block and the `yield` disagree on the number of arguments is somewhat flexible and not described in full detail here.

A method can use the `block_given?` primitive to see if the caller provided a block. You are unlikely to use this method often: if a block is needed, it is conventional just to assume it is given and have `yield` fail if it is not. In situations where a method may or may not expect a block, often other regular arguments determine whether a block should be present. If not, then `block_given?` is appropriate.

Here is a recursive method that counts how many times it calls the block (with increasing numbers) before the block returns a true result:

```ruby
def count i
  if yield i
    1
  else
    1 + (count(i+1) { |x| yield x })
  end
end
```

For example, `count(3) { |x| x >= 5 }` evaluates to 3 because the block is called with 3, 4, and 5. The odd thing is that there is no direct way to pass the caller's block as the callee's block argument. But we can create a new block `{ |x| yield x }`, and the lexical scope of the `yield` in its body will do the right thing. If blocks were actually function closures that we could pass as objects, then this would be unnecessary function wrapping.

### The Proc Class

Blocks are not quite closures because they are not objects.
We cannot store blocks in a field, pass them as a regular method argument, assign them to a variable, put them in an array, etc. (Notice in ML and Racket, we could do the equivalent things with closures.) Hence we say that blocks are not "first-class values" because a first-class value is something that can be passed and stored like anything else in the language.

However, Ruby has "real" closures too: the class `Proc` has instances that are closures. The method `call` in `Proc` is how you apply the closure to arguments, for example `x.call` (for no arguments) or `x.call(3,4)`. To make a `Proc` out of a block, you can write `lambda { ... }` where `{ ... }` is any block. Interestingly, `lambda` is not a keyword. It is just a method in class `Object` (and every class is a subclass of `Object`, so `lambda` is available everywhere) that creates a `Proc` out of a block it is passed. You can define your own methods that do this too; consult the documentation for the syntax.

Usually all we need are blocks, such as in these examples that pass blocks to compute something about an array:

```ruby
a = [3,5,7,9]
b = a.map { |x| x + 1 }
i = b.count { |x| x >= 6 }
```

But suppose we wanted to create an array of blocks, i.e., an array where each element was something we could "call" with a value. You cannot do this in Ruby because arrays hold objects and blocks are not objects. So this is an error:

```ruby
c = a.map { |x| { |y| x >= y } } # wrong, a syntax error
```

But we can use `lambda` to create an array of instances of `Proc`:

```ruby
c = a.map { |x| lambda { |y| x >= y } }
```

Now we can send the `call` message to elements of the `c` array:

```ruby
c[2].call 17
j = c.count { |x| x.call(5) }
```

Ruby's design is an interesting contrast from ML and Racket, which just provide full closures as the natural choice. In Ruby, blocks are more convenient to use than `Proc` objects and suffice in most uses, but programmers still have `Proc` objects when needed.
Is it better to distinguish blocks from closures and make the more common case easier with a less powerful construct, or is it better just to have one general, fully powerful feature?

### Hashes and Ranges

The `Hash` and `Range` classes are two standard-library classes that are also very common but probably a little less common than arrays. Like arrays, there is special built-in syntax for them. They are also similar to arrays and support many of the same iterator methods, which helps reinforce the concept that "how to iterate" can be separated from "what to do while iterating."

A hash is like an array except the mapping is not from numeric indices to objects. Instead, the mapping is from (any) objects to objects. If `a` maps to `b`, we call `a` a key and `b` a value. Hence a hash is a collection that maps a set of keys (all keys in a hash are distinct) to values, where the keys and values are just objects. We can create a hash with syntax like this:

```ruby
{"SML" => 7, "Racket" => 12, "Ruby" => 42}
```

As you might expect, this creates a hash with keys that here are strings. It is also common (and more efficient) to use Ruby's symbols for hash keys as in:

```ruby
{:sml => 7, :racket => 12, :ruby => 42}
```

We can get and set values in a hash using the same syntax as for arrays, where again the key can be anything, such as:

```ruby
h1['a'] = "Found A"
h1[false] = "Found false"
h1['a']
h1[false]
h1[42]
```

There are many methods defined on hashes. Useful ones include `keys` (returns an array of all keys), `values` (similar for values), and `delete` (given a key, removes it and its value from the hash). Hashes also support many of the same iterators as arrays, such as `each` and `inject`, but some take the keys and the values as arguments, so consult the documentation.

A range represents a contiguous sequence of numbers (or other things, but we will focus on numbers). For example, `1..100` represents the integers 1, 2, 3, ..., 100.
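Before continuing with ranges, here is a quick runnable sketch of the hash operations described above (all standard `Hash` methods):

```ruby
h = {:sml => 7, :racket => 12, :ruby => 42}
h[:ml] = 8                  # add a new key/value pair
h.delete :racket            # remove a key and its value
h.keys.sort                 # [:ml, :ruby, :sml]
total = h.values.inject(0) { |acc, v| acc + v }  # 57
```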
We could use an array like `Array.new(100) {|i| i}`, but ranges are more efficiently represented and, as seen with `1..100`, there is more convenient syntax to create them. Although there are often better iterators available, a method call like `(0..n).each {|i| e}` is a lot like a for-loop from 0 to n in other programming languages.

It is worth emphasizing that duck typing lets us use ranges in many places where we might naturally expect arrays. For example, consider this method, which counts how many elements of `a` have squares less than 50:

```ruby
def foo a
  a.count {|x| x*x < 50}
end
```

We might naturally expect `foo` to take arrays, and calls like `foo [3,5,7,9]` work as expected. But we can pass to `foo` any object with a `count` method that expects a block taking one argument. So we can also do `foo (2..10)`, which evaluates to 6.

## Subclassing and Inheritance

### Basic Idea and Terminology

Subclassing is an essential feature of class-based OOP. If class `C` is a subclass of `D`, then every instance of `C` is also an instance of `D`. The definition of `C` inherits the methods of `D`, i.e., they are part of `C`'s definition too. Moreover, `C` can extend by defining new methods that `C` has and `D` does not. And it can override methods, changing their definition from the inherited one. In Ruby, this is much like in Java. In Java, a subclass also inherits the field definitions of the superclass, but in Ruby fields (i.e., instance variables) are not part of a class definition because each object instance just creates its own instance variables.

Every class in Ruby except `Object` has one superclass.⁴ The classes form a tree where each node is a class and the parent is its superclass. The `Object` class is the root of the tree. In class-based languages, this is called the **class hierarchy**.
By the definition of subclassing, a class has all the methods of all its ancestors in the tree (i.e., all nodes between it and the root, inclusive), subject to overriding.

### Some Ruby Specifics

- A Ruby class definition specifies a superclass with `class C < D ... end` to define a new class `C` with superclass `D`. Omitting the `< D` implies `< Object`, which is what our examples so far have done.
- Ruby's built-in methods for reflection can help you explore the class hierarchy. Every object has a `class` method that returns the class of the object. Consistently, if confusingly at first, a class is itself an object in Ruby (after all, every value is an object). The class of a class is `Class`. This class defines a method `superclass` that returns the superclass.
- Every object also has methods `is_a?` and `instance_of?`. The method `is_a?` takes a class (e.g., `x.is_a? Integer`) and returns true if the receiver is an instance of `Integer` or any (transitive) subclass of `Integer`, i.e., if it is below `Integer` in the class hierarchy. The method `instance_of?` is similar but returns true only if the receiver is an instance of the class exactly, not a subclass. (Note that in Java the primitive `instanceof` is analogous to Ruby's `is_a?`.) Using methods like `is_a?` and `instance_of?` is "less object-oriented" and therefore often not preferred style. They are in conflict with duck typing.

### A First Example: `Point` and `ColorPoint`

Here are definitions for simple classes that describe simple two-dimensional points and a subclass that adds a color (just represented with a string) to instances.

```ruby
class Point
  attr_accessor :x, :y
  def initialize(x,y)
    @x = x
    @y = y
  end
end
```

⁴ Actually, the superclass of `Object` is `BasicObject` and `BasicObject` has no superclass, but this is not an important detail, so we will ignore it.
```ruby
class Point # continuing the definition of Point from above
  def distFromOrigin
    Math.sqrt(@x * @x + @y * @y)
  end
  def distFromOrigin2
    Math.sqrt(x * x + y * y)
  end
end

class ColorPoint < Point
  attr_accessor :color
  def initialize(x, y, c="clear")
    super(x, y)
    @color = c
  end
end
```

There are many ways we could have defined these classes. Our design choices here include:

- We make the `@x`, `@y`, and `@color` instance variables mutable, with public getter and setter methods.
- The default "color" for a `ColorPoint` is `"clear"`.
- For pedagogical purposes revealed below, we implement the distance-to-the-origin in two different ways. The `distFromOrigin` method accesses instance variables directly whereas `distFromOrigin2` uses the getter methods on `self`. Given the definition of `Point`, both will produce the same result.

The `initialize` method in `ColorPoint` uses the `super` keyword, which allows an overriding method to call the method of the same name in the superclass. This is not required when constructing Ruby objects, but it is often desired.

### Why Use Subclassing?

We now consider the style of defining colored points using a subclass of the class `Point` as shown above. It turns out this is good OOP style in this case. Defining `ColorPoint` is good style because it allows us to reuse much of our work from `Point` and it makes sense to treat any instance of `ColorPoint` as though it "is a" `Point`. But there are several alternatives worth exploring because subclassing is often overused in object-oriented programs, so it is worth considering at program-design time whether the alternatives are better than subclassing.

First, in Ruby, we can extend and modify classes with new methods. So we could simply change the `Point` class by replacing its `initialize` method and adding getter/setter methods for `@color`. This would be appropriate only if every `Point` object, including instances of all other subclasses of `Point`, should have a color, or at least having a color would not mess up anything else in our program.
Usually modifying classes is not a modular change: you should do it only if you know it will not negatively affect anything in the program using the class.

Second, we could just define `ColorPoint` "from scratch," copying over (or retyping) the code from `Point`. In a dynamically typed language, the difference in semantics (as opposed to style) is small: instances of `ColorPoint` will now return false if sent the message `is_a?` with argument `Point`, but otherwise they will work the same. In languages like Java/C#/C++, superclasses have effects on static typing. One advantage of not subclassing `Point` is that any later changes to `Point` will not affect `ColorPoint`; in general in class-based OOP, one has to worry about how changes to a class will affect any subclasses.

Third, we could have `ColorPoint` be a subclass of `Object` but have it contain an instance variable, call it `@pt`, holding an instance of `Point`. Then it would need to define all of the methods defined in `Point` to forward the message to the object in `@pt`. Here are two examples, omitting all the other methods (`x=`, `y=`, `distFromOrigin`, `distFromOrigin2`):

```ruby
def initialize(x,y,c="clear")
  @pt = Point.new(x,y)
  @color = c
end

def x
  @pt.x # forward the message to the object in @pt
end
```

This approach is bad style since again subclassing is shorter and we want to treat a `ColorPoint` as though it "is a" `Point`. But in general, many programmers in object-oriented languages overuse subclassing. In situations where you are making a new kind of data that includes a pre-existing kind of data as a separate sub-part of it, this instance-variable approach is better style.
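To make the comparison concrete, here is how the subclassing version behaves. The class definitions below repeat, in condensed form, the ones above so the sketch is self-contained:

```ruby
class Point
  attr_accessor :x, :y
  def initialize(x, y)
    @x = x
    @y = y
  end
  def distFromOrigin
    Math.sqrt(@x * @x + @y * @y)
  end
end

class ColorPoint < Point
  attr_accessor :color
  def initialize(x, y, c="clear")
    super(x, y)
    @color = c
  end
end

cp = ColorPoint.new(3, 4)
cp.distFromOrigin   # 5.0, inherited unchanged from Point
cp.color            # "clear", the default
cp.is_a?(Point)     # true with subclassing; it would be false with the
                    # from-scratch or instance-variable alternatives
```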
### Overriding and Dynamic Dispatch

Now let's consider a different subclass of `Point`, which is for three-dimensional points:

```ruby
class ThreeDPoint < Point
  attr_accessor :z
  def initialize(x,y,z)
    super(x,y)
    @z = z
  end
  def distFromOrigin
    d = super
    Math.sqrt(d * d + @z * @z)
  end
  def distFromOrigin2
    d = super
    Math.sqrt(d * d + z * z)
  end
end
```

Here, the code-reuse advantage is limited to inheriting methods `x`, `x=`, `y`, and `y=`, as well as using other methods in `Point` via `super`. Notice that in addition to overriding `initialize`, we used overriding for `distFromOrigin` and `distFromOrigin2`. Computer scientists have been arguing for decades about whether this subclassing is good style. On the one hand, it does let us reuse quite a bit of code. On the other hand, one could argue that a `ThreeDPoint` is not conceptually a (two-dimensional) `Point`, so passing the former when some code expects the latter could be inappropriate. Others say a `ThreeDPoint` is a `Point` because you can "think of it" as its projection onto the plane where `z` equals 0. We will not resolve this legendary argument, but you should appreciate that often subclassing is bad/confusing style even if it lets you reuse some code in a superclass.

The argument against subclassing is made stronger if we have a method in `Point` like `distance` that takes another (object that behaves like a) `Point` and computes the distance between the argument and `self`. If `ThreeDPoint` wants to override this method with one that takes another (object that behaves like a) `ThreeDPoint`, then `ThreeDPoint` instances will *not* act like `Point` instances: their `distance` method will fail when passed an instance of `Point`.

We now consider a much more interesting subclass of `Point`.
Instances of this class `PolarPoint` behave equivalently to instances of `Point` except for the arguments to `initialize`, but instances use an internal representation in terms of polar coordinates (radius and angle):

```ruby
class PolarPoint < Point
  def initialize(r,theta)
    @r = r
    @theta = theta
  end
  def x
    @r * Math.cos(@theta)
  end
  def y
    @r * Math.sin(@theta)
  end
  def x=(a)
    b = y # avoid multiple calls to the y method
    @theta = Math.atan(b / a)
    @r = Math.sqrt(a*a + b*b)
    self
  end
  def y=(b)
    a = x # avoid multiple calls to the x method
    @theta = Math.atan(b / a)
    @r = Math.sqrt(a*a + b*b)
    self
  end
  def distFromOrigin
    @r
  end
  # distFromOrigin2 already works!!
end
```

Notice instances of `PolarPoint` do not have instance variables `@x` and `@y`, but the class does override the `x`, `x=`, `y`, and `y=` methods so that clients cannot tell the implementation is different (modulo round-off of floating-point numbers): they can use instances of `Point` and `PolarPoint` interchangeably. A similar example in Java would still have fields from the superclass, but would not use them. The advantage of `PolarPoint` over `Point`, which admittedly is for sake of example, is that `distFromOrigin` is simpler and more efficient.

The key point of this example is that the subclass does *not* override `distFromOrigin2`, but the inherited method works correctly. To see why, consider the definition in the superclass:

```ruby
def distFromOrigin2
  Math.sqrt(x * x + y * y)
end
```

Unlike the definition of `distFromOrigin`, this method uses other method calls for the arguments to the multiplications.
Recall this is just syntactic sugar for:

```ruby
def distFromOrigin2
  Math.sqrt(self.x() * self.x() + self.y() * self.y())
end
```

In the superclass, this can seem like an unnecessary complication since `self.x()` is just a method that returns `@x` and methods of `Point` can access `@x` directly, as `distFromOrigin` does. However, overriding methods `x` and `y` in a subclass of `Point` changes how `distFromOrigin2` behaves in instances of the subclass. Given a `PolarPoint` instance, its `distFromOrigin2` method is defined with the code above, but when called, `self.x` and `self.y` will call the methods defined in `PolarPoint`, not the methods defined in `Point`. This semantics goes by many names, including dynamic dispatch, late binding, and virtual method calls. There is nothing quite like it in functional programming, since the way `self` is treated in the environment is special, as we discuss in more detail next.

### The Precise Definition of Method Lookup

The purpose of this discussion is to consider the semantics of object-oriented language constructs, particularly calls to methods, as carefully as we have considered the semantics of functional language constructs, particularly calls to closures. As we will see, the key distinguishing feature is what `self` is bound to in the environment when a method is called. The correct definition is what we call dynamic dispatch.

The essential question we will build up to is: given a call `e0.m(e1,e2,...,en)`, what are the rules for "looking up" what method definition `m` we call? This is a non-trivial question in the presence of overriding.

But first, let us notice that in general such questions about how we "look up" something are often essential to the semantics of a programming language. For example, in ML and Racket, the rules for looking up variables led to lexical scope and the proper treatment of function closures.
And in Racket, we had three different forms of let-expressions exactly because they have different semantics for how to look up variables in certain subexpressions. In Ruby, the variable-lookup rules for local variables in methods and blocks are not too different from in ML and Racket, despite some strangeness from variables not being declared before they are used.

But we also have to consider how to "look up" instance variables, class variables, and methods. In all cases, the answer depends on the object bound to `self`, and `self` is treated specially. In any environment, `self` maps to some object, which we think of as the "current object": the object currently executing a method. To look up an instance variable `@x`, we use the object bound to `self`: each object has its own state and we use `self`'s state. To look up a class variable `@@x`, we just use the state of the object bound to `self.class` instead. To look up a method `m` for a method call is more sophisticated.

In class-based object-oriented languages like Ruby, the rule for evaluating a method call like `e0.m(e1,...,en)` is:

- Evaluate `e0`, `e1`, ..., `en` to values, i.e., objects `obj0`, `obj1`, ..., `objn`.
- Get the class of `obj0`. Every object "knows its class" at run-time. Think of the class as part of the state of `obj0`.
- Suppose `obj0` has class `A`. If `m` is defined in `A`, call that method. Otherwise recur with the superclass of `A` to see if it defines `m`. Raise a "method missing" error if neither `A` nor any of its superclasses define `m`. (Actually, in Ruby the rule is instead to call a method called `method_missing`, which any class can define, so we again start looking in `A` and then its superclass. But most classes do not define `method_missing`, and the definition of it in `Object` raises the error we expect.)
- We have now found the method to call.
If the method has formal arguments (i.e., argument names or parameters) `x1, x2, ..., xn`, then the environment for evaluating the body will map `x1` to `obj1`, `x2` to `obj2`, etc. But there is one more thing that is the essence of object-oriented programming and has no real analogue in functional programming: we always have `self` in the environment. **While evaluating the method body, `self` is bound to `obj0`, the object that is the "receiver" of the message.**

The binding of `self` in the callee as described above is what is meant by the synonyms "late binding," "dynamic dispatch," and "virtual method calls." It is central to the semantics of Ruby and other OOP languages. It means that when the body of `m` calls a method on `self` (e.g., `self.someMethod 34` or just `someMethod 34`), we use the class of `obj0` to resolve `someMethod`, not necessarily the class of the method we are executing. This is why the `PolarPoint` class described above works as it does.

There are several important comments to make about this semantics:

- Ruby's mixins complicate the lookup rules a bit more, so the rules above are actually simplified by ignoring mixins. When we study mixins, we will revise the method-lookup semantics accordingly.
- This semantics is quite a bit more complicated than ML/Racket function calls. It may not seem that way if you learned it first, which is common because OOP and dynamic dispatch seem to be a focus in many introductory programming courses. But it is truly more complicated: we have to treat the notion of `self` differently from everything else in the language. Complicated does not necessarily mean inferior or superior; it just means the language definition has more details that need to be described. This semantics has clearly proved useful to many people.
- Java and C# have significantly more complicated method-lookup rules.
They do have dynamic dispatch as described here, so studying Ruby should help understand the semantics of method lookup in those languages. But they also have static overloading, in which classes can have multiple methods with the same name but taking different types (or numbers) of arguments. So we need to not just find some method with the right name; we have to find one that matches the types of the arguments at the call. Moreover, multiple methods might match, and the language specifications have a long list of complicated rules for finding the best match (or giving a type error if there is no best match). In these languages, one method overrides another only if its arguments have the same type and number. None of this comes up in Ruby, where "same method name" always means overriding and we have no static type system. In C++, there are even more possibilities: we have static overloading and different forms of methods that either do or do not support dynamic dispatch.

### Dynamic Dispatch Versus Closures

To understand how dynamic dispatch differs from the lexical scope we used for function calls, consider this simple ML code that defines two mutually recursive functions:

```ml
fun even x = if x=0 then true else odd (x-1)
and odd  x = if x=0 then false else even (x-1)
```

This creates two closures that both have the other closure in their environment. If we later shadow the `even` closure with something else, e.g.,

```ml
fun even x = false
```

that will not change how `odd` behaves. When `odd` looks up `even` in the environment where `odd` was defined, it will get the function on the first line above. That is "good" for understanding how `odd` works just from looking at where it is defined. On the other hand, suppose we wrote a better version of `even` like:

```ml
fun even x = (x mod 2) = 0
```

Now our `odd` is not "benefiting from" this optimized implementation. In OOP, we can use (abuse?)
subclassing, overriding, and dynamic dispatch to change the behavior of `odd` by overriding `even`:

```ruby
class A
  def even x
    if x==0 then true else odd(x-1) end
  end
  def odd x
    if x==0 then false else even(x-1) end
  end
end

class B < A
  def even x # changes B's odd too!
    x % 2 == 0
  end
end
```

Now `B.new.odd 17` will execute faster because `odd`'s call to `even` will resolve to the method in `B`, all because of what `self` is bound to in the environment. While this is certainly convenient in the short example above, it has real drawbacks. We cannot look at one class (`A`) and know how calls to the code there will behave. In a subclass, what if someone overrode `even` and did not know that it would change the behavior of `odd`? Basically, any calls to methods that might be overridden need to be thought about very carefully. It is likely often better to have private methods that cannot be overridden to avoid problems. Yet overriding and dynamic dispatch is the biggest thing that distinguishes object-oriented programming from functional programming.

### Implementing Dynamic Dispatch Manually in Racket

Let's now consider coding up objects and dynamic dispatch in Racket using nothing more than pairs and functions.⁵ This serves two purposes:

- It demonstrates that one language's semantics (how the primitives like message send work in the language) can typically be coded up as an idiom (simulating the same behavior via some helper functions) in another language. This can help you be a better programmer in different languages that may not have the features you are used to.
- It gives a lower-level way to understand how dynamic dispatch "works" by seeing how we would do it manually in another language. An interpreter for an object-oriented language would have to do something similar for automatically evaluating programs in the language.

⁵ Though we did not study it, Racket has classes and objects, so you would not actually want to do this in Racket.
The point is to understand dynamic dispatch by manually coding up the same idea. Also notice that we did an analogous exercise to better understand closures earlier in the course: we showed how to get the effect of closures in Java using objects and interfaces, or in C using function pointers and explicit environments.

Our approach will be different from what Ruby (or Java for that matter) actually does in these ways:

- Our objects will just contain a list of fields and a list of methods. This is not "class-based," in which an object would have a list of fields and a class-name and then the class would have the list of methods. We could have done it that way instead.
- Real implementations are more efficient. They use better data structures (based on arrays or hashtables) for the fields and methods rather than simple association lists.

Nonetheless, the key ideas behind how you implement dynamic dispatch still come through. By the way, we are wise to do this in Racket rather than ML, where the types would get in our way. In ML, we would likely end up using "one big datatype" to give all objects and all their fields the same type, which is basically awkwardly programming in a Racket-like way in ML. (Conversely, typed OOP languages are often no friendlier to ML-style programming unless they add separate constructs for generic types and closures.)

Our objects will just have fields and methods:

```racket
(struct obj (fields methods))
```

We will have `fields` hold an immutable list of *mutable* pairs where each element pair is a symbol (the field name) and a value (the current field contents). With that, we can define helper functions `get` and `set` that, given an object and a field-name, return or mutate the field appropriately. Notice these are just plain Racket functions, with no special features or language additions. We do need to define our own function, called `assoc-m` below, because Racket's `assoc` expects an immutable list of immutable pairs.
```racket
(define (assoc-m v xs)
  (cond [(null? xs) #f]
        [(equal? v (mcar (car xs))) (car xs)]
        [#t (assoc-m v (cdr xs))]))

(define (get obj fld)
  (let ([pr (assoc-m fld (obj-fields obj))])
    (if pr
        (mcdr pr)
        (error "field not found"))))

(define (set obj fld v)
  (let ([pr (assoc-m fld (obj-fields obj))])
    (if pr
        (set-mcdr! pr v)
        (error "field not found"))))
```

More interesting is calling a method. The `methods` field will also be an association list mapping method names to functions (no mutation needed since we will be less dynamic than Ruby). The key to getting dynamic dispatch to work is that these functions will all take an extra *explicit* argument that is *implicit* in languages with built-in support for dynamic dispatch. This argument will be "self" and our Racket helper function for sending a message will simply pass in the correct object:

```racket
(define (send obj msg . args)
  (let ([pr (assoc msg (obj-methods obj))])
    (if pr
        ((cdr pr) obj args)
        (error "method not found" msg))))
```

Notice how the function we use for the method gets passed the "whole" object `obj`, which will be used for any sends to the object bound to `self`. (The code above uses Racket's support for variable-argument functions because it is convenient; we could have avoided it if necessary. Here, `send` can take any number of arguments greater than or equal to 2. The first argument is bound to `obj`, the second to `msg`, and all others are put in a list (in order) that is bound to `args`. Hence we expect `(cdr pr)` to be a function that takes two arguments: we pass `obj` for the first argument and the list `args` for the second argument.)
Now we can define `make-point`, which is just a Racket function that produces a point object:

```racket
(define (make-point _x _y)
  (obj
   (list (mcons 'x _x) (mcons 'y _y))
   (list
    (cons 'get-x (lambda (self args) (get self 'x)))
    (cons 'get-y (lambda (self args) (get self 'y)))
    (cons 'set-x (lambda (self args) (set self 'x (car args))))
    (cons 'set-y (lambda (self args) (set self 'y (car args))))
    (cons 'distToOrigin
          (lambda (self args)
            (let ([a (send self 'get-x)]
                  [b (send self 'get-y)])
              (sqrt (+ (* a a) (* b b)))))))))
```

Notice how each of the methods takes a first argument, which we just happen to call `self`; the name has no special meaning here in Racket. We then use `self` as an argument to `get`, `set`, and `send`. If we had some other object we wanted to send a message to or access a field of, we would just pass that object to our helper functions by putting it in the `args` list. In general, the second argument to each function is a list of the "real arguments" in our object-oriented thinking. By using the `get`, `set`, and `send` functions we defined, making and using points "feels" just like OOP:

```racket
(define p1 (make-point 4 0))
(send p1 'get-x)        ; 4
(send p1 'get-y)        ; 0
(send p1 'distToOrigin) ; 4
(send p1 'set-y 3)
(send p1 'distToOrigin) ; 5
```

Now let's simulate subclassing. Our encoding of objects does not use classes, but we can still create something that reuses the code used to define points. We can create points with a color field and getter/setter methods for this field: the key idea is to have the constructor create a point object with `make-point` and then extend this object by creating a new object that has the extra field and methods, in the same style as the polar-point constructor shown below. We can use "objects" returned from such a `make-color-point` constructor just like we use "objects" returned from `make-point`, plus we can use the field `color` and the methods `get-color` and `set-color`.

The essential distinguishing feature of OOP is dynamic dispatch.
Our encoding of objects “gets dynamic dispatch right” but our examples do not yet demonstrate it. To do so, we need a “method” in a “superclass” to call a method that is defined/overridden by a “subclass.” As we did in Ruby, let’s define polar points by adding new fields and overriding the `get-x`, `get-y`, `set-x`, and `set-y` methods. A few details about the code below:

- As with color-points, our “constructor” uses the “superclass” constructor.
- As would happen in Java, our polar-point objects still have `x` and `y` fields, but we never use them.
- For simplicity, we just override methods by putting the replacements earlier in the method list than the overridden methods. This works because `assoc` returns the first matching pair in the list.

Most importantly, the `distToOrigin` “method” still works for a polar point because the method calls in its body will use the procedures listed with `get-x` and `get-y` in the definition of `make-polar-point`, just like dynamic dispatch requires. The correct behavior results from our `send` function passing the whole object as the first argument.
```
(define (make-polar-point _r _th)
  (let ([pt (make-point #f #f)])
    (obj
     (append (list (mcons 'r _r)
                   (mcons 'theta _th))
             (obj-fields pt))
     (append
      (list
       (cons 'set-r-theta
             (lambda (self args)
               (begin
                 (set self 'r (car args))
                 (set self 'theta (cadr args)))))
       (cons 'get-x
             (lambda (self args)
               (let ([r (get self 'r)]
                     [theta (get self 'theta)])
                 (* r (cos theta)))))
       (cons 'get-y
             (lambda (self args)
               (let ([r (get self 'r)]
                     [theta (get self 'theta)])
                 (* r (sin theta)))))
       ;; set-x and set-y recompute r and theta from the updated
       ;; rectangular coordinates
       (cons 'set-x
             (lambda (self args)
               (let ([a (car args)]
                     [b (send self 'get-y)])
                 (send self 'set-r-theta
                       (sqrt (+ (* a a) (* b b)))
                       (atan b a)))))
       (cons 'set-y
             (lambda (self args)
               (let ([b (car args)]
                     [a (send self 'get-x)])
                 (send self 'set-r-theta
                       (sqrt (+ (* a a) (* b b)))
                       (atan b a))))))
      (obj-methods pt)))))
```

We can create a polar-point object and send it some messages like this:

```
(define p3 (make-polar-point 4 3.1415926535))
(send p3 'get-x)        ; -4 (or a slight rounding error)
(send p3 'get-y)        ; 0 (or a slight rounding error)
(send p3 'distToOrigin) ; 4 (or a slight rounding error)
(send p3 'set-y 3)
(send p3 'distToOrigin) ; 5 (or a slight rounding error)
```
Zico: Efficient GPU Memory Sharing for Concurrent DNN Training

Gangmuk Lim, UNIST; Jeongseob Ahn, Ajou University; Wencong Xiao, Alibaba Group; Youngjin Kwon, KAIST; Myeongjae Jeon, UNIST

https://www.usenix.org/conference/atc21/presentation/lim

Abstract

GPUs are the workhorse in modern server infrastructure, fueling advances in a number of compute-intensive workloads such as deep neural network (DNN) training. Several recent works propose solutions for sharing GPU resources across multiple concurrent DNN training jobs, but none of them address the rapidly increasing memory footprint introduced by such job co-locations, which greatly limits the effectiveness of sharing GPU resources. In this paper, we present Zico, the first DNN system that aims at reducing the system-wide memory consumption for concurrent training. Zico keeps track of the memory usage pattern of each training job by monitoring its progress on GPU computations and makes memory reclaimed from a job globally sharable. Based on this memory management scheme, Zico automatically decides a strategy to share memory among concurrent jobs with minimum delay on training while not exceeding a given memory budget such as GPU memory capacity. Our evaluation shows that Zico outperforms existing GPU sharing approaches and delivers benefits over a variety of job co-location scenarios.

1 Introduction

Recent advances in deep neural networks (DNNs) have made tremendous progress on a wide range of applications, including object detection [24], language modeling [11, 47], translation [40], and speech recognition [27]. As a number of new DNN models are being explored, developers take advantage of hardware accelerators to train the models, such as TPU [22] and GPU, which is the most popular choice.
GPUs are the workhorse in server infrastructure and yet are becoming highly contended resources at the same time [20, 43]. To utilize expensive GPU resources, efficient GPU sharing mechanisms have become indispensable. Prior work focuses on either temporally multiplexing the GPU in its entirety [26,43,44] or spatially sharing compute units [10]. Temporal sharing is a software mechanism that dedicates both compute cores and memory in the GPU solely to a single training job for a time quantum (e.g., 1 minute). Despite its good flexibility, this approach often cannot efficiently utilize GPU resources. For example, most compute resources are left idle for common translation models such as GNMT [42] and language models such as RHN [47]. These training algorithms include a number of RNN modules [28], such as LSTM [12] and GRU [8] networks, which expose a small degree of data parallelism to the GPU, causing under-utilization of GPU resources.

As a different approach, spatial sharing can provide better throughput than temporal sharing as long as a single training job does not fully saturate GPU compute resources [44]. However, a limitation in applying spatial sharing is the working set size of concurrent jobs, which grows substantially with job co-location. If the working set does not fit in GPU memory, the system has to kill a job or swap GPU memory to the host, which overshadows the performance benefit of spatial sharing. Therefore, to make spatial sharing widely applicable, it is essential to reduce the memory footprint of co-located training jobs.

We observe that sharing intermediate data generated during co-located training jobs significantly reduces the total memory footprint. Training is a highly iterative procedure, first navigating layers in order (forward pass) and then the same layers in reverse order (backward pass) for each batch of input data.
During the training procedure, intermediate outputs from model layers called feature maps dominate memory footprint [18,32]. Feature maps are generated in each layer during the forward pass and later consumed in the backward pass to update the layer. Due to the regular bi-directional execution, memory consumption in a single training job commonly exhibits a cyclic pattern: memory consumption gradually increases in the forward pass and then decreases in the backward pass. Thus, a simple yet effective strategy to save memory consumption is creating a large GPU memory pool and elastically sharing the memory pool among concurrent training jobs. To increase the sharing opportunity, coordination of concurrent training jobs is needed to make them run different passes, e.g., the forward pass for job A (increasing its working set) and the backward pass for job B (decreasing its working set). This approach causes memory allocations of one job to happen simultaneously with memory deallocations of the other job, efficiently reducing the system-wide memory footprint.

Although the sharing idea is plausible, the way today’s DNN frameworks execute training on GPU poses significant challenges. Current frameworks are mainly designed for the solo training case. Following dataflow dependency, they allocate the memory required for each DNN kernel computation ahead of time and issue as many kernels as possible to the GPU stream, i.e., the per-job work queue for its GPU computations, in order to saturate the GPU’s internal compute resources. This makes GPU computations asynchronous with, and parallel to, CPU processing. Thus, the platforms are unaware of the progress of GPU computations and of when memory is actually consumed by the GPU. Without proper handling of this asynchrony, memory sharing cannot guarantee correctness: for example, other training jobs could corrupt shared but untouched memory that has already been assigned to a kernel still waiting in the stream.
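The payoff of this coordination can be sketched with a toy model (all shapes and numbers below are illustrative assumptions, not measurements from Zico): each job's working set ramps up during the forward pass and back down during the backward pass, and offsetting one job's iteration against the other lowers the combined peak.

```python
# Toy model of the cyclic memory pattern: a triangular working-set curve
# that rises during the forward pass and falls during the backward pass.
# The period, peak, and shift values are illustrative only.

def cyclic_usage(t, period, peak):
    """Working set of one job at time t: rises for the first half of the
    iteration (forward pass), falls for the second half (backward pass)."""
    phase = t % period
    half = period / 2
    if phase < half:
        return peak * phase / half
    return peak * (period - phase) / half

def combined_peak(shift, period=100, peak=8.0, steps=1000):
    """Peak of job A plus job B when B's iteration starts `shift` later."""
    return max(
        cyclic_usage(t, period, peak) + cyclic_usage(t + shift, period, peak)
        for t in range(steps)
    )

aligned = combined_peak(0)   # both forward passes coincide: peaks add up
offset = combined_peak(50)   # A's forward overlaps B's backward
```

With the two iterations aligned, the peaks coincide and the combined footprint reaches twice a single job's peak; shifting job B by half a period overlaps A's forward pass with B's backward pass, and in this toy setup the combined footprint never exceeds a single job's peak.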
In this paper, we propose Zico, a DNN platform that enables efficient memory sharing for concurrent training. Zico retrofits a widely used DNN platform, TensorFlow, to maximize the overall throughput of concurrent training. The goal of Zico is finding the best coordinated executions of concurrent training to fully utilize GPU computational and memory resources. To achieve this goal, (i) Zico accurately monitors the computational progress of training jobs and, based on that, allocates and deallocates memory for DNN kernels, yielding memory usage patterns close to the GPU’s view. (ii) Zico incorporates runtime information (e.g., iteration times, memory usage patterns, and the GPU memory limit) and executes a job scheduler, called the scrooge scheduler, to efficiently steer concurrent jobs to utilize the shared memory pool. (iii) Zico efficiently organizes the entire GPU space as an elastic shared memory pool to support the scrooge scheduler.

To detect the computational progress of asynchronous kernels, Zico leverages a CUDA primitive called a CUDA event, which notifies progress of GPU kernels. Zico uses CUDA events to identify the allocation and release times of memory used by a GPU kernel. Based on this information, Zico executes our novel scrooge scheduler to forecast the memory consumption trend of concurrent training at the iteration boundary and introduces the minimum stall time on each iteration. Nevertheless, the memory usage trend of the co-scheduled jobs varies according to how they interfere with each other in the use of GPU compute units. To adapt to these dynamic behaviors, the scrooge scheduler refines its decisions based on feedback collected at runtime.

Zico organizes the memory pool as a collection of chunks called regions and separates their uses based on data characteristics. DNN training generates several types of data as tensors, categorized mainly as ephemeral tensors with high occurrences and long-lived tensors like feature maps, which constitute most of the memory footprint.
By separating regions by type, Zico ensures that memory stored with feature maps does not interfere with other transient data while making their demands follow the cyclic pattern of training iteration. This design choice allows our scheduling decisions to be applied with little disruption without losing sharing opportunity. Being prototyped on a popular DNN framework, TensorFlow, we evaluate Zico experimentally using six models ranging from translation to object detection on V100 and RTX 2080 Ti GPUs. The results show that Zico enables effective memory sharing over a wide range of memory consumption trends. For high memory footprint, Zico is up to 8.3x and 1.6x faster than traditional spatial sharing and temporal sharing approaches, respectively, especially when concurrently training non-identical models. Furthermore, for low memory footprint, where no stall on concurrent training is needed, Zico behaves similarly to traditional spatial sharing and is up to 1.6x faster than temporal sharing. Overall, Zico achieves speedups, regardless of whether concurrent training is based on the same or distinct models. 2 Background 2.1 Deep Neural Network Training The training process typically relies on iterative optimization methods like stochastic gradient descent (SGD) [13], Momentum [39], and Adam [23]. In each iteration, forward pass (FP) is followed by backward pass (BP) on a batch of training dataset. During FP, by computing on the layer’s input, weights, and bias, each layer outputs feature maps to be used as an input to the next downstream layer. At the end of FP, the last layer produces a loss representing the accuracy of the current model on the input batch. Using the loss value, BP computes the gradients by flowing the layers in reverse order and aggregates the gradient values to update model parameters (i.e., layers’ weights and bias). On finishing BP, the training repeats FP and BP on the next batch. 
As the batch size is usually fixed, the computation load and the memory usage characteristics are usually very similar across iterations [15,43]. It is widely known that model parameters occupy only a small fraction of memory, and the majority is consumed to store feature maps generated in the FP computation [18,32,43]. BP needs feature maps to calculate the gradients at each layer. Hence, unless recomputed [6,14,19], feature maps are usually kept in memory for a long time until they are no longer accessed in BP. The amount of memory consumption is determined by several factors, such as the number of layers, layer size and type, input batch size, etc. There is also other intermediate data that a training iteration creates, e.g., gradient maps that represent the output of each layer during BP, data local to each kernel, etc. This data is all ephemeral, as its memory is released soon after allocation [18,32]. For brevity, we assume all memory allocations in DNN training are based on tensors.

2.2 GPU Sharing Use Cases

Users run training either on shared GPU clusters or on dedicated servers. In both cases, GPU sharing is becoming a fundamental technique to better utilize GPU resources. In this subsection, we introduce two specific scenarios that can take advantage of sharing GPUs.

Hyperparameter tuning (inter-job). With the increasing popularity of applications fueled by DNNs, a number of new models are being developed by DNN practitioners every day. A model under development exposes many high-level properties, e.g., learning rate and momentum, as hyperparameters that need to be optimized. This task is known as hyperparameter tuning [3]. As hyperparameters constitute a large search space, there are several popular tools such as Hyperdrive [35] and HyperOpt [4] that automate hyperparameter optimization and construct a new model with the best (or desired) quality for users.
These tools usually generate a large number of closely related training jobs (as many as 100s [26, 43]) that explore a different set of hyperparameters for the same reference model. Hyperparameter tuning jobs dominate training workloads run atop shared GPU clusters [20, 26, 43]. To get them done in a timely manner on heavily contended GPUs, prior works propose several techniques such as temporal and spatiotemporal sharing to apportion a single GPU over multiple training jobs [43, 44]. Nevertheless, spatial GPU sharing has greater performance potential for this workload, as discussed in Section 7.

Gradient accumulation (intra-job). Gradient accumulation is a promising method to speed up model convergence when hyperparameters other than the batch size are stabilized. It runs a set of consecutive mini-batches and accumulates the gradients of those mini-batches before updating model parameters. The essential goal is to give the illusion of training on a large batch, which better improves convergence, without oversubscribing GPU memory, by using small mini-batches. A common practice has been to process these mini-batches sequentially. Nonetheless, efficient spatial GPU sharing can offer a sharing incentive to concurrent training on mini-batches. One might wonder whether spatial sharing for this training is indeed plausible. Based on our observation, translation or speech recognition models (e.g., GNMT [42]) often underutilize GPU compute resources, making it beneficial to share GPU compute units. On top of that, our system supports highly effective sharing of GPU memory, enabling each of the concurrent training jobs to use a mini-batch size slightly smaller than, if not the same as, the original mini-batch size. Altogether, our system opens up a new opportunity to speed up training for gradient accumulation, which we will discuss in Section 7.

2.3 Spatial GPU Sharing

NVIDIA has developed the Multi-Process Service (MPS) [10], an alternative way to share a GPU among multiple CUDA processes.
With MPS, NVIDIA V100 GPU supports up to 48 processes to run concurrently on a single GPU, with each process assigned with separate GPU compute resources, i.e., SMs [10]. In NVIDIA A100 [29], a newer generation, GPU sharing architecture further partitions HW paths in the memory system, e.g., memory controllers and address busses, to prevent the concurrent processes from interfering with each other when demands for memory bandwidth are high. NVIDIA’s GPU sharing mechanisms have two commonalities. First, they are mainly designed for sharing “compute resources” spatially. Second, they attribute GPU sharing to demands for protection among untrusted users requiring strong isolation. Since not all use cases require strong isolation among training jobs, e.g., hyperparameter tuning driven by a single user [26], recent work supports a spatial GPU sharing similar to MPS in a single process domain [44]. Regardless of protection level, the underlying mechanism enabling spatial sharing within GPU is very similar, if not the same — and so is the resulting performance. 3 Challenges for Memory Sharing GPU has a limited amount of HW resources, requiring it to be used in high efficiency. As GPU’s compute and memory resources are shared to run concurrent training limiting per-training resource capacity, it is crucial to thoroughly understand the current practices in DNN frameworks and uncover challenges for spatial GPU training. In this section, we discuss three major challenges to address in Zico. 3.1 Memory Bloating Major DNN frameworks [1, 5, 31] typically maintain feature maps in memory until they are no longer accessed. As discussed earlier, feature maps have a relatively longer lifespan between the first access and the last access, making the most of in-use memory consumed to store the feature maps. Figure 1 compares cumulative distributions of lifespans for feature maps and other data in NASNet training. 
As the figure shows, feature maps exhibit longer lifespans, with 134ms on average and 234ms as the median value, as opposed to 18ms on average and 2ms as the median value for the other data. We further investigate cumulative distributions of tensor sizes of the two data types, showing that feature maps are larger. As a consequence, as shown in Figure 2, feature maps lead to peak memory consumption substantially higher than the minimum, which corresponds to the model size, in each iteration. We call this issue memory bloating.

Traditional spatial GPU sharing mechanisms are vulnerable to memory bloating. DNN frameworks like TensorFlow are not designed for memory sharing: an internal memory manager maintains a local pool of memory for a single training job, so no memory is ever released back to the GPU. As a result, the peak memory usages of concurrent training jobs all add up, saturating GPU memory even when each individual job has a modest memory footprint. To illustrate, let us consider concurrent training of two identical models, each demanding more than half of GPU memory. In this case, the memory demand across the two local memory pools exceeds GPU memory capacity, and the system begins to take advantage of CPU-side memory as a swap space. To facilitate this, recent NVIDIA GPUs, including V100, provide a feature known as Unified Virtual Memory (UVM), which is transparent to DNN platforms. We found that using UVM for DNN training is currently costly and severely affects overall performance despite its great flexibility. To confirm the effect, we compare throughput between solo training and concurrent training for ResNet-50 using an NVIDIA V100 when a training job occupies 70% of GPU memory. To make the comparison fair, we configure a single training job to use 50% of GPU resources set by MPS. There is a dramatic throughput degradation in concurrent training (i.e., 8 times slower) as it suffers from GPU memory oversubscription.
Therefore, we should decrease the risk of GPU memory being used up during concurrent training. ### 3.2 Workload Variability As an additional challenge, we explain a workload characteristic that makes memory optimization for spatial GPU sharing fundamentally complicated. Although all models follow a cyclic pattern in memory usage, as shown in Figure 2, memory usage patterns are inherently different across models. For example, ResNet-50 has a beefy shape in which memory bloating appears for a fairly large time duration. In contrast, GNMT has a lean shape in which peak memory appears for a short period and quickly disappears. Therefore, such variability must be taken into account in designing a scheduling policy for concurrent training. Nonetheless, we take advantage of an observation that a similar memory usage pattern repeats over iterations for a single model. DNN frameworks require that computation kernels be ordered in a specific way, e.g., following topological sort, thus keeping the corresponding memory operations ordered across iterations [32]. So, we believe this determinism is prevalent in most of the training tasks. ### 3.3 Asynchrony with GPU Processing CPU processing in DNN frameworks is in parallel with GPU computations. Before issuing a kernel to a GPU stream, the memory manager in the platform allocates tensors required for the kernel computation. After issuing the kernel and returning immediately, CPU processing brings back its control and can do any subsequent task asynchronously with GPU computations. Meanwhile, GPU may or may not execute the issued kernel depending on whether earlier kernels are still pending or not. Driven by this GPU-specific property, DNN frameworks produce a static schedule of DNN kernels mainly customized for single training, in which kernel operations are ordered based on dependencies every iteration. 
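The gap between the CPU's view and the GPU's view can be illustrated with a small sketch (not Zico's code; the issue times and durations below are made up): kernels issued to a single GPU stream execute sequentially, so a kernel's tensors are really held until the kernel completes on the stream, possibly long after the CPU issued it and moved on.

```python
def stream_completion_times(issue_times, durations):
    """Kernels on a single GPU stream execute in order: each starts at the
    later of its issue time and the previous kernel's completion time."""
    done = []
    t = 0.0
    for issue, dur in zip(issue_times, durations):
        t = max(t, issue) + dur
        done.append(t)
    return done

# The CPU issues three 5 ms kernels back-to-back at t = 0, 1, 2 ms; on the
# GPU they actually complete at t = 5, 10, 15 ms, so the third kernel's
# tensors stay live far longer than the CPU-side timeline suggests.
completions = stream_completion_times([0, 1, 2], [5, 5, 5])
```

This is exactly why a memory manager driven by CPU-side issue times (Figure 3) diverges from actual GPU-side consumption (Figure 2(a)).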
DNN frameworks usually allocate the memory required for kernel operations in sequence ahead of time, while issuing as many kernels as possible to the stream to saturate the GPU. This way of exercising the GPU for single training has been common because there is no concern regarding correctness in memory allocations. Specifically, suppose we take tensor memory released by a kernel and allocate it for the next kernel, even though the earlier kernel has not yet been completed by the GPU. In that case, memory corruption does not occur, since kernels in the GPU stream are processed sequentially. However, to enable memory sharing for concurrent training, it is critical that the CPU process keep track of GPU memory usage trends precisely. To illustrate the current limitation concretely, Figure 3 shows the effective memory observed in TensorFlow while training ResNet-50. In the figure, training exhibits a cyclic pattern for a short period followed by a long pause due to kernels pending on the GPU during an iteration. Unlike the CPU’s view, the actual memory usage trend from the GPU’s view appears in Figure 2(a), which perfectly follows a continuous cyclic pattern. In summary, for concurrent training with multiple GPU streams, today’s approaches that make memory sharing decisions based on the CPU’s view are vulnerable to memory corruption.

4 Design Overview

Zico aims at providing efficient spatial GPU sharing by enabling coordinated job scheduling and GPU memory management for concurrent training. As a system currently built on TensorFlow, the framework keeps track of the lifespans of memory used in each training job and produces a schedule of concurrent training executions to avoid GPU memory over-subscription. We seek to achieve the following goals in the design of Zico:

- **High performance.** A single iteration runs for a short time duration, from tens to hundreds of milliseconds. Thus, a modest overhead with memory sharing in each iteration can manifest as a long delay in the entire training.
We should attempt to minimize such overhead and achieve high performance.

- **Wide model support.** Section 3.2 demonstrates a wide range of patterns in memory demand across models during training. Our memory sharing strategy should be general, not restricted to specific memory patterns. The concurrent training jobs may not even run similar, let alone identical, models.

- **Common approach.** Our approaches are mainly compatible with DNN platforms that express a job execution as a computation graph ahead of time, e.g., TensorFlow [1], Caffe [21], and MXNet [5]. The other platforms are based on imperative programming and train a model without constructing a computation graph, namely, in eager mode. The memory-aware scheduler in Zico makes decisions based only on observed memory demands at runtime, making it also applicable to the eager mode.

Zico has two key system modules, as shown in Figure 4: (i) the scrooge scheduler, a processing unit that orchestrates executions of concurrent training driven by memory demand patterns; and (ii) the memory manager, a unified memory allocator that handles both inter- and intra-training memory requests.

**Scrooge scheduler.** Zico monitors GPU progress to capture precise memory usage in the GPU for each training job over time. Observing memory usage, Zico schedules concurrent jobs to achieve the best possible performance in training while keeping the total memory consumption within a predefined budget, which does not exceed GPU memory capacity. This is a sophisticated problem for which a naive strategy rarely works. When memory is sufficient for running concurrent training without sharing, no coordination among jobs might be necessary. In contrast, when memory is tight, we need a schedule of concurrent training in order not to exceed the given memory budget.
A simple approach towards saving memory consumption can be coordinating training jobs in such a way that forward pass execution (i.e., memory allocation phase) is overlapped with backward pass execution (i.e., memory deallocation phase) as much as possible. However, when memory demand patterns are beefy such as in Figure 2(a), the system-wide memory usage can blow up because the memory demand in the forward pass grows fast while the backward pass shrinks its memory demand relatively slowly. To provide concrete guidance for memory sharing, we devise scrooge scheduler that facilitates an objective function helping forecast the system-wide memory demand in the future. Before starting a new iteration, the scheduler takes into account the lifespans of in-use memory regions, which are sharable memory units in Zico. Then, the scheduler predicts whether allowing the iteration immediately will consume memory less than the memory budget. If this is nearly impossible, the scheduler estimates the minimum stall time to be applied on the new iteration so that memory allocations in the forward pass to be issued later are safely fulfilled when more memory is available. It is essential to maintain precise region lifespan in order to make a correct decision in the scheduling. To meet this goal, Zico iteratively refines region lifespan based on feedback collected from prior iterations at runtime. Scrooge scheduler is agnostic to programming model in DNN platforms and only relies on information on memory demand patterns. For this reason, Zico is able to perform spatial memory sharing as long as memory requests are appropriately clustered on regions and their lifespans can be estimated. As explained next, we facilitate tensor types to constitute regions for sharing, but many different ways are indeed possible — e.g., classification based on tensor access time intervals in eager mode [2]. 
**Memory manager.** Zico organizes the entire GPU memory space into a collection of regions, where a region is a contiguous memory space that stores a set of tensors of the same type. Using regions is a natural choice for us to mitigate the sharing overhead and to keep memory storing feature maps from being contended by other tensor data. Using regions further helps job scheduling decisions be made promptly. The scrooge scheduler makes use of the memory demand pattern that changes dynamically within an iteration. If we considered memory changes at the granularity of tensors, we might need to investigate too many time points along the way, putting a strain on the scheduler. Details will be discussed in Section 5.

The memory manager separates memory space into two areas, *shareable* and *non-shareable*. The shareable area represents currently unused memory that can be granted to any in-flight training job in need of more space (mainly for feature maps), whereas the non-shareable area constitutes job-local memory pools. Zico analyzes the computational graph and extracts type information on tensors at runtime. During training iterations, allocation requests for feature map tensors are always served from their own regions first in the local memory pool. If regions in the local memory pool are used up, the memory manager assigns a new region from the shareable area. In general, feature maps demand the most regions in the system, and these regions are mainly shared across concurrent training jobs.

The current design of Zico limits sharing of a common memory area to two training jobs, since many models we observe, including the models in Figure 2, exhibit rather beefy memory demand patterns or high GPU utilization. For co-locating more than two jobs, a feasible approach is to organize the jobs into a group of pairs and schedule each pair independently with its own memory budget. This is a natural extension to Zico, so we leave it as future work.
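A minimal sketch of this two-level scheme (illustrative only; the class, its counters, and its policy are simplified assumptions, not Zico's implementation): each job serves allocations from its local pool first and falls back to the shareable area, and a freed region can be returned either to the job-local pool or to the shareable area for the other job to claim.

```python
class RegionPool:
    """Toy two-level region allocator: a global shareable pool plus
    per-job local pools, counted in whole regions."""

    def __init__(self, total_regions):
        self.shareable = total_regions  # regions not held by any job
        self.local = {}                 # job id -> count of free local regions

    def alloc(self, job):
        free = self.local.get(job, 0)
        if free > 0:                    # serve from the job-local pool first
            self.local[job] = free - 1
            return True
        if self.shareable > 0:          # otherwise claim a shareable region
            self.shareable -= 1
            return True
        return False                    # budget exhausted: caller must stall

    def free(self, job, to_local=False):
        if to_local:                    # keep the region for reuse by this job
            self.local[job] = self.local.get(job, 0) + 1
        else:                           # make it globally shareable again
            self.shareable += 1
```

In this sketch a failed `alloc` is the point where the scrooge scheduler would stall the requesting iteration until the co-located job's backward pass frees regions back into the shareable area.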
**Protection level.** We provide Zico as a single framework instance mainly for performance reasons. Existing multi-process solutions such as MPS do not promise good performance for elastic memory space sharing across different processes. For example, to grant memory across two MPS processes, we would have to invoke CUDA APIs such as cudaMalloc and cudaFree quite frequently in each training iteration. Among these APIs, cudaFree is known to stall the GPU's pipelined execution when invoked [9], making it harmful if used recklessly. Our measurement also reveals that a single ResNet-50 training run that allocates memory using these APIs becomes 7x slower than one that allocates memory locally. Note that the key scenarios discussed in Section 2.2 are enough to train models in the same protection domain. Beyond those, we design Zico to be useful for a variety of scenarios as long as isolation is not a primary concern, e.g., the same tenancy with trusting users in a shared GPU cluster.

## 5 Scheduling Algorithm

In this section, we formalize the scheduling problem for concurrent training and introduce the scrooge scheduler. All related implementation details on how to obtain memory usage patterns are explained in Section 6.

### 5.1 Problem Definition

We make use of memory consumption at the region level to shape the memory pattern of a job. Although regions are coarser-grained than the memory utilized by individual tensors, they still provide meaningful information for computing memory sharing potential. Since regions allocated for feature maps are the main target for sharing, in the problem formulation we assume all regions of a job have a lifetime that spans the forward-backward passes of a single iteration.
Formally, we denote the $M$ regions required for an iteration of a training job as $\{R_i : 1 \leq i \leq M\}$, following the allocation order, with region $R_i$ having two parameters that indicate its *lifespan*: $\text{Time}(R_i(a))$ for the allocation time and $\text{Time}(R_i(d))$ for the deallocation time. Assume that at a time $T$ there are $K$ active regions in the system, whose sizes sum to the memory footprint of the concurrent jobs. To achieve efficient GPU memory provisioning, we need to minimize the time delay applied to each training iteration without overcommitting the system-wide memory budget $C$. This objective turns into the following formulation: $$\arg\min_{\text{TimeShift}} (\text{Time}(R_1(d)) - \text{Time}(R_1(a)) + \text{TimeShift}) \tag{1}$$ subject to $$\sum_{i=1}^{K} \text{Size}(\hat{R}_i) \leq C \tag{2}$$ at any time $T$ over a training iteration, where $\{\text{Size}(\hat{R}_i) : 1 \leq i \leq K\}$ are the sizes of the active regions in the system. Intuitively, Equation 1 states that an iteration, whose duration corresponds to $\text{Time}(R_1(d)) - \text{Time}(R_1(a))$, must be delayed by the minimum $\text{TimeShift}$ that satisfies the constraint. Note that there are other costs, such as model synchronization in distributed training, that affect training time. They are mostly static [15,25,37,45] and can easily be factored in. ### 5.2 Time Shift Model From the perspective of memory sharing, the worst possible case is having the forward passes of all training jobs run simultaneously. This may leave no memory sharing opportunity, as regions freed in the backward pass of one job may not be usable by the other training job. Therefore, when the memory budget $C$ would be exceeded, scrooge scheduler adds a time delay, driven by the $\text{TimeShift}$ parameter, to a training iteration so as to ensure that memory allocations during the forward pass occur when enough memory is available.
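The constraint in Equation 2 can be written down directly as a check over sampled time points. This is an illustrative helper with hypothetical names; a region's lifespan is treated as the half-open interval [alloc, dealloc).

```python
# Sketch of the budget constraint (Equation 2): at any time T, the sizes of
# regions whose lifespan covers T must sum to at most the budget C.

def active_size(regions, t):
    """regions: list of (alloc_t, dealloc_t, size). Memory live at time t."""
    return sum(s for a, d, s in regions if a <= t < d)

def within_budget(regions, budget, times):
    """True iff the constraint holds at every sampled time point."""
    return all(active_size(regions, t) <= budget for t in times)
```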
Since DNN training is highly periodic and deterministic, once training on an apportioned GPU compute capacity has stabilized, we see almost no variation in the region lifespans over iterations. Moreover, within each iteration, allocation times for adjacent regions exhibit strong temporal dependency; in other words, the time interval between $\text{Time}(R_i(a))$ and $\text{Time}(R_{i+1}(a))$ is largely preserved across iterations. This observation suggests that $\text{TimeShift}$ be applied to the entire iteration, not to an individual region, and hence be estimated on a per-iteration basis to determine when the current iteration has to start.

### 5.3 Memory Sharing Algorithm

Now, we explain how scrooge scheduler works to enable spatial memory sharing. Scrooge scheduler optimizes for the minimum possible iteration time based on Equation 1 at runtime. To solve the problem, the scheduling algorithm must address two challenges: C-1) the lifespan of a region $R_i$ changes according to how the two training jobs execute under co-location; C-2) while the lifespans are changing, the scheduler has to find an optimal $\text{TimeShift}$ in Equation 1. Scrooge scheduler performs iterative search steps to reach this goal. For C-1, scrooge scheduler introduces a feedback-driven online process in which the scheduler monitors the lifespans of all regions during the current step and updates them for use in the next step. For C-2, at each step, scrooge scheduler decides the $\text{TimeShift}$ of the co-located training jobs so as to minimize their iteration times. After several steps, the lifespans are sufficiently adjusted and stabilize.

**Profiling phase (the first search step).** When a new training job is issued, scrooge scheduler initiates a profiling phase during which it collects basic information on the new job. In particular, the scheduler runs the first iteration of the new job in isolation. During this profiling phase, scrooge scheduler identifies regions by type and obtains the lifespan of each feature map region in the solo run.

**Informed phase.** After the profiling phase, scrooge scheduler knows useful information about the co-located jobs. In this informed phase, for a new iteration to be started for a job (e.g., job A) with $\text{TimeShift} = 0$, the scheduler steps through time using the lifespan information and predicts whether the memory constraint in Equation 2 would be violated. If so, the scheduler waits for time $T$, which leads to $\text{TimeShift} \mathrel{+}= T$, and repeats the prediction. This time-shifting continues until the scheduler meets the memory constraint — this is guaranteed because the other co-located job (e.g., job B) will ultimately release all regions at the end of its backward pass.

To illustrate, for job A's forward pass, $\text{Time}(R_2^{\text{Job A}}(a)) - \text{Time}(R_1^{\text{Job A}}(a))$ indicates the time duration in which job A uses $\text{Size}(R_1^{\text{Job A}})$ amount of memory, which corresponds to the allocation of the first region $R_1^{\text{Job A}}$. Likewise, $\text{Time}(R_M^{\text{Job A}}(a)) - \text{Time}(R_1^{\text{Job A}}(a))$ indicates the time taken for the entire forward pass, in which job A gradually demands more memory by allocating regions from $R_1^{\text{Job A}}$ to $R_M^{\text{Job A}}$. In this way, scrooge scheduler can forecast the memory demand trend for job A's forward pass and, similarly, the trend for its backward pass.

To fulfill the condition in Equation 2, scrooge scheduler also needs to know the memory demand trend of the co-located job B. Assume that job B is deallocating its $i$-th region $R_i^{\text{Job B}}$ when the scheduler attempts to schedule job A's forward pass. Then, as job B gradually deallocates its in-use regions from $R_i^{\text{Job B}}$ to $R_M^{\text{Job B}}$ in order over time, job B will ultimately release $\sum_{j=i}^{M} \text{Size}(R_j^{\text{Job B}})$ amount of memory after $\text{Time}(R_M^{\text{Job B}}(d)) - \text{Time}(R_i^{\text{Job B}}(d))$. With this memory trend information on job A and job B, scrooge scheduler can decide, during job B's backward pass, whether job A can start, by computing 1) the amount of memory required by job A as time progresses and 2) the amount of memory released by job B as time progresses, in terms of regions. This yields the system-wide set of active regions as time progresses, which scrooge scheduler uses to predict whether memory consumption will always fit within the defined memory budget, i.e., whether Equation 2 is satisfied.

In the informed phase, we first use a memory budget lower than the actual memory budget $C$ to calculate region lifespans under a conservative schedule that incurs smaller interference among jobs and thus less aggressive memory sharing. At this point, the execution of concurrent training is far from optimal, i.e., it has long time shifts. From here, scrooge scheduler iteratively optimizes the objective function: it gradually raises the lowered memory budget to allow more interference over time and keeps updating region lifespans until it reaches the actual memory budget $C$. Calculating region lifespans under co-location with such adaptation is necessary because co-scheduling jobs whose combined compute demands exceed 100% of the GPU's capacity would otherwise interfere with the lifespan calculations. Note that scrooge scheduler works regardless of whether the two models are heterogeneous or their forward passes overlap. Although scheduling is a per-iteration process, it operates at low cost because memory patterns are already discretized at the region level (the scheduling overhead is discussed in Section 7.4).

In reality, the slowdown of each training job is very predictable when the two jobs exhibit similar GPU compute demands. A rare but challenging case is when a beefy model is trained along with a lean model that suddenly suffers a dramatic slowdown from co-location. This results in memory not being released from the lean model for a long time while the beefy model keeps allocating memory, leading to the system-wide memory usage going over GPU memory capacity. In this case, Zico gives up the attempt at spatial GPU sharing and falls back to time-multiplexing.

**Deadlock potential.** With concurrent jobs in phases of increasing memory allocation, a deadlock could arise when there is insufficient memory to allocate to these in-flight jobs. Scrooge scheduler does not start the current iteration of a training job if global memory consumption would go beyond the memory budget during its forward-backward passes. In the worst case, scrooge scheduler schedules the current iteration when the co-located job finishes its iteration (releasing all memory), guaranteeing training progress similar to temporal sharing and thus preventing the deadlock. Such planned execution must be accompanied by tracking of the memory actually used by the GPU, which we explain in the next section.

## 6 Memory Management with Concurrency

We discuss how to track GPU memory usage for sharing, classify tensors according to usage type, and manage GPU memory for tensors of different types using regions.

### 6.1 Tracking Memory Usage in GPU

Most existing DNN platforms are customized mainly for single-job training and are unaware of memory sharing among concurrent training jobs. As described in Section 3.3, the inherent asynchrony between CPU and GPU does not incur any correctness issue for memory operations when there is only one job running on the platform.
However, for concurrent training with each job assigned its own GPU stream, we must not make memory released by the CPU readily available to co-located jobs, since kernel computations launched on the multiple GPU streams are independent and unordered. In essence, memory corruption could occur because there is no dependency among kernels in different GPU streams. We apply two techniques to support efficient memory sharing without such corruption. **Memory deallocation.** For memory deallocation, rather than immediately releasing memory from the CPU's point of view, we must wait until the GPU actually completes the corresponding kernel execution. Zico uses **CUDA events** to meet this requirement. A CUDA event is a marker that can be inserted into a GPU stream and monitored by the CPU for completion. Once the GPU reaches a CUDA event, it is guaranteed that kernels launched before that event have finished execution. Hence, tensors of those kernels can be safely released if no longer accessed. Tracking in-use memory with CUDA events is not cost-free, as it requires CPU cycles. To reduce the overhead, Zico inserts CUDA events periodically at a certain memory allocation granularity (currently, 8 MB for CNNs and 256 MB for RNNs). We discuss the sensitivity to this granularity in Section 7.4. **Memory allocation.** The memory tracking raises another issue, namely *early memory allocation*. Although deallocation of memory occurs after the actual kernel completion as a result of the memory tracking, allocation is performed by the CPU when the kernel is issued to the GPU stream. Thus, the allocation time is typically earlier than the time the kernel actually starts executing, accesses the memory, and completes within the GPU.
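The event-based deferred deallocation described above can be simulated in plain Python (no CUDA): a CPU-side free request is queued behind a marker and only takes effect once the device is observed to have passed that marker. The class and names are illustrative, not Zico's code.

```python
# Simulation of event-based deferred deallocation: frees requested by the
# CPU are queued behind a marker (standing in for a CUDA event) and are
# only reclaimed once the "device" reports the marker as completed.

class DeferredFreeList:
    def __init__(self):
        self.pending = []          # (event_id, buffer) awaiting completion
        self.completed = set()     # event ids the device has reached
        self.freed = []            # buffers actually reclaimed, in order

    def free_after(self, event_id, buffer):
        """CPU-side free: buffer becomes reclaimable only after event_id."""
        self.pending.append((event_id, buffer))

    def signal(self, event_id):
        """Called when the device stream reaches the marker."""
        self.completed.add(event_id)
        still = []
        for ev, buf in self.pending:
            if ev in self.completed:
                self.freed.append(buf)     # safe: kernels before ev are done
            else:
                still.append((ev, buf))
        self.pending = still
```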
It is always desirable to maintain a small number of in-flight kernels (i.e., kernels pending on the GPU stream), since doing so narrows the above time gap and ultimately avoids unnecessary pre-allocations of memory by the CPU; otherwise, the system makes memory allocations too early compared to the actual time the memory is used by GPU kernels. We often found that memory consumption soars due to the number of in-flight kernels issued at full CPU speed. To address this issue, Zico limits the number of in-flight kernels to a level just sufficient to keep the GPU busy. Currently, the right number is obtained via offline profiling for each model, without causing a loss in overall training performance. It can also easily be found online by running a few iterations while varying the limit on in-flight kernels.

### 6.2 Tensor Classification

With a computation graph constructed for training, Zico differentiates the tensors used by the GPU kernel operators forming the graph. In general, we classify tensors into three types: feature map tensors, gradient map tensors, and temporary tensors, where temporary tensors are neither feature maps nor gradient maps. A temporary tensor is mostly created by an operator and used internally, never accessed later by other operators. To classify tensors correctly, we exploit three pieces of information available for a tensor, considering the operator creating the tensor as the source operator: (i) whether the source belongs to the forward pass; (ii) whether there is any destination operator accessing the tensor outside the source; and (iii) whether the destination belongs to the backward pass. Feature map tensors meet all three conditions, while gradient map tensors are distinguished by complying with (ii) and (iii) only. The remaining tensors are all sorted into temporary tensors.
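The three-condition classification above maps directly to a small pure function. The inputs mirror conditions (i)-(iii); the function name and string labels are our own for illustration.

```python
# Tensor classification by the three conditions (i)-(iii) from the text:
# (i) source operator is in the forward pass, (ii) some destination operator
# outside the source accesses the tensor, (iii) that destination is in the
# backward pass. Feature maps meet all three; gradient maps meet only
# (ii) and (iii); everything else is temporary.

def classify_tensor(src_in_fwd, has_external_dest, dest_in_bwd):
    if src_in_fwd and has_external_dest and dest_in_bwd:
        return "feature_map"
    if has_external_dest and dest_in_bwd:
        return "gradient_map"
    return "temporary"
```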
For the proposed tensor classification, we need to identify whether a computation kernel is involved in the forward pass or the backward pass. The memory manager in Zico pinpoints the memory usage peak as the starting point of the backward pass. This method exploits the fundamental property of DNN training that the forward pass is a memory allocation phase and the backward pass is a memory deallocation phase. It is a simple but generic approach that does not depend on the DNN implementation and is not tied to a specific framework.

### 6.3 Managing Memory Regions

Based on the tensor classification, the memory manager in Zico accepts the tensor type as a parameter and then allocates the tensor on a region according to that type. Region-based memory management is a basic mechanism in TensorFlow, and we extend it to build our own memory sharing system. The essential goal of Zico's memory management policy is to promote spatial sharing with low interference between co-scheduled training jobs. By separating shareable regions (global) from non-shareable regions (job-local), we ensure no contention occurs when the non-shareable regions suffice to allocate new tensors for the local job. Further, within the local regions, temporary tensors are stored exclusively on a few regions managed by their own free block lists, which track empty memory space for allocating new tensors. Temporary tensors are small and are frequently allocated and soon deallocated, so they would contend with other tensor types for access to the free block lists unless managed separately. The sizes of the shareable and non-shareable areas change elastically depending on runtime demands. As a region stores tensors of the same type, for feature map tensors the demand increases and decreases within each iteration: during the forward pass, the local memory manager continuously requests regions from the shareable area, and during the backward pass these regions are returned to the shareable area.
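The peak-memory heuristic for locating the start of the backward pass (Section 6.2 above) can be sketched as a scan over per-step memory deltas of one iteration. This is an illustrative reconstruction, not Zico's code.

```python
# Sketch of the peak-memory heuristic: given the signed per-step memory
# deltas of one training iteration (allocations positive, frees negative),
# the step with the highest cumulative usage marks the boundary between
# the forward pass (allocation phase) and the backward pass.

def backward_start(deltas):
    """Returns the index of the first step considered part of the
    backward pass, i.e., the step right after the cumulative-usage peak."""
    usage, peak, peak_idx = 0, float("-inf"), 0
    for i, d in enumerate(deltas):
        usage += d
        if usage > peak:
            peak, peak_idx = usage, i
    return peak_idx + 1
```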
The free regions in the shareable area are shared through a free list whose updates are synchronized by a lock. To prevent potential contention on the lock, the granularity of the regions needs to be chosen carefully. We experimentally validated different region sizes over diverse training jobs and, from this sensitivity study, chose the region size to be at least tens of MB to minimize lock contention. The results of the sensitivity study can be found in Section 7.4.

## 7 Evaluation

**Experimental setup.** We implement Zico in TensorFlow 1.13.1 and compare it with spatial sharing using MPS (MPS) and temporal sharing with no job switching overhead (Temporal), which is similar to the approach taken in the state of the art [44]. We select six training benchmarks across different DNN tasks: NASNet [48], ResNet-110 [16], ResNet-50 [16], GNMT [42], RHN [47], and BERT [11]. All models use the stochastic gradient descent (SGD) optimizer. The evaluation is performed on two machines. Machine1 has an NVIDIA Tesla V100 GPU with 32 GB GPU memory, a 3.8 GHz Intel Xeon Gold 5222 CPU with 4 cores, and 64 GB of host memory. Machine2 has an NVIDIA RTX 2080 Ti GPU with 11 GB GPU memory, a 3.8 GHz Intel Xeon Gold 5222 CPU with 4 cores, and 64 GB of host memory. Both machines run Ubuntu 16.04. We use Machine1 and Machine2 to evaluate large models and small models, respectively.

### 7.1 Training Same Models

We first compare Zico, MPS, and Temporal when two identical models are trained concurrently. The memory budget in Zico is configured as the GPU memory capacity. Figure 5 shows the throughput of the six models when training over different input batch sizes (i.e., numbers of samples) on the x-axis. For each model, some batch sizes are chosen to make MPS exceed GPU memory capacity, to show how effective Zico is in such cases. The figure shows that, compared to temporal sharing, Zico achieves higher throughput across all batch sizes in all models.
In particular, Zico outperforms Temporal by on average 35% for NASNet and 37% for GNMT across the batch sizes. These results are rather surprising, as the largest batch size in each model results in solo-training memory consumption that reaches close to the GPU memory limit. Even in such an extreme memory usage scenario, Zico finds an optimal time point to start the forward pass of one job while the backward pass of the other job is in progress. Therefore, Zico never has a model being completely time-multiplexed, making it co-schedule the jobs more efficiently than Temporal. Zico achieves throughput comparable to MPS from small to modest batch sizes for each model. MPS is sometimes slightly better than Zico. This is not because MPS provides a better schedule for concurrent training but mainly because the underlying setup differs: Zico runs on a single framework, whereas MPS runs the two training jobs on different framework instances and processes. Nonetheless, throughput in MPS drops significantly when models are trained with large batch sizes. On large batches, MPS suffers from GPU memory oversubscription, which incurs UVM overhead. Consequently, compared to MPS, Zico is up to 4.7 times faster across models. Note that solo training of RHN incurs high memory usage even with small batch sizes, causing memory oversubscription for MPS across all batch sizes. Figure 6 presents the system-wide memory usage (summing the memory usage of the two co-located jobs) to reveal the degree of GPU memory oversubscription handled by Zico. In Figure 7, we show how memory usage patterns are coordinated in Zico to reduce the system-wide memory footprint. For space reasons, we select only two models, ResNet-110 and BERT, for which concurrent training is scheduled slightly differently. In ResNet-110, almost no delay on each iteration occurs, i.e., $\text{TimeShift} \approx 0$, making it behave similarly to non-coordinated spatial GPU sharing.
On the contrary, in each scheduling interval of BERT, a slight delay is applied to every iteration to keep memory consumption within the budget. It is worth noting that, when training the same models, the memory-aware schedules across iterations of a job are very regular, making scheduling decisions across the co-located jobs rather deterministic. We also found that training the same models entails a nearly identical slowdown for each job. Hence, scrooge scheduler can quickly stabilize its memory-aware scheduling across jobs even without beginning from a low memory budget during the informed phase. In general, Zico delivers more benefit to less computation-intensive models such as GNMT. Over GPU generations, GPU compute capacity scales faster than GPU memory capacity, pushing the bottleneck towards GPU memory capacity. If this trend continues, the advantage of Zico over temporal sharing is likely to grow in the future.

### 7.2 Training Non-identical Models

Now, we use two distinct models in concurrent training. In this experiment, we select five models to form combinations based on different GPU compute demands: GNMT (low), RHN (low), NASNet (high), ResNet-110 (high), and BERT (high). Figure 8 shows the throughput normalized to Temporal over diverse co-location combinations, all of which oversubscribe GPU memory capacity. In the figure, we put the memory demand of each individual training job in parentheses, computed as a percentage of GPU memory capacity and obtained by varying batch sizes for the model. For training non-identical models, we organize the co-location combinations into three scenarios: (i) both models have low GPU utilization (i.e., RH+GT), (ii) one model has low GPU utilization (i.e., NN+GT and BT+GT), and (iii) both models have high GPU utilization (i.e., NN+RN). First, Figure 8 shows that Zico significantly outperforms MPS regardless of the GPU utilization of the co-located jobs.
Zico is around 5.7x faster than MPS on average, and specifically up to 5.1x faster in RH+GT, 8.3x in NN+GT, 6x in BT+GT, and 6.5x in NN+RN. Our earlier conclusion is reaffirmed: MPS experiences significant performance degradation under GPU memory oversubscription. In comparison to Temporal, Zico achieves higher throughput by 42% in RH+GT, 46% in NN+GT, 27% in BT+GT, and 15% in NN+RN on average. That is, Zico favors scenarios (i) and (ii) over (iii) because ample GPU cycles are available when running a model with low compute demand. Nonetheless, since in Zico either co-located job starts an iteration as soon as the memory constraint is met, no fair use of compute resources is guaranteed. As a result, in NN(30%)+GT(90%), Zico obtains a great throughput improvement for GT over Temporal but slightly worse performance for NN, due to slightly fewer iterations being scheduled within a given time window than under Temporal. We leave balancing individual job throughput on top of increasing the aggregate throughput as future work. It is also worth noting that Zico achieves different scheduling ratios for the co-located jobs depending on their memory usage patterns. To illustrate, we show the memory usage patterns in NN+RN in Figure 9, where ResNet-110 has a relatively shorter iteration time than NASNet. Within the given memory budget, Zico schedules the executions of the two jobs such that ResNet-110 runs its iterations more frequently than NASNet does — Zico keeps scheduling ResNet-110 during time periods with low-to-moderate memory usage in NASNet to maximize GPU memory utilization.

### 7.3 Dynamic Memory Budget Change

Recall that for co-locating more than two jobs, e.g., four jobs, we propose to organize the jobs into a group of pairs and schedule each pair independently with a lower memory budget. Then, when some of the jobs depart, the system would have fewer pairs and need to make a schedule based on the increased memory budget.
Therefore, adapting to memory budget changes is a fundamental functionality required in Zico to deal with dynamic workloads. In this section, we evaluate Zico's scheduling decisions when several consecutive changes are made to the memory budget. Figure 10 shows how Zico schedules two NASNet training jobs while decreasing the memory budget and then increasing it back. Before the first change, the two jobs run concurrently with zero delay between consecutive iteration executions of a job, by virtue of scrooge scheduler, under a budget set to 70% of GPU memory. Around time point A, the memory budget is lowered to 50% and Zico begins to schedule the co-located jobs more conservatively. During this period, each job exhibits a wider gap (140 ms) between consecutive iterations. Around time point B, the memory budget returns to 70% and Zico again takes a more aggressive schedule on memory sharing. At this moment, we attempt to further increase the memory budget to fully utilize GPU memory, but we do not see any change in either throughput or memory footprint. The reason is that 70% of GPU memory is already enough for Zico to make an ideal scheduling decision with no $\text{TimeShift}$; that is, Zico does not overuse GPU memory unnecessarily.

### 7.4 Design Validation

**GPU memory tracking.** As explained in Section 6.1, tracking in-use memory is essential to share memory correctly between multiple GPU streams. Table 1 shows the sensitivity to the memory tracking granularity. If CUDA events are inserted too frequently, they impose overhead on the CPU and delay training iterations. For instance, inserting a CUDA event for every GPU kernel launch slows down training by more than 50% in GNMT. To mitigate this overhead, CUDA events are inserted periodically in Zico. For models that use the CPU intensively, like GNMT, coarse-grained tracking is required to avoid this overhead due to the significant number of lightweight kernels to issue.
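The periodic event insertion described above (one marker per fixed amount of allocated memory, rather than per kernel) can be sketched as a tiny counter. The class name and units are illustrative; the paper only specifies the granularities (8 MB for CNNs, 256 MB for RNNs).

```python
# Sketch of periodic CUDA-event insertion: instead of one marker per kernel
# launch, a marker is emitted only after every `granularity` bytes of
# allocations, trading tracking precision for lower CPU overhead.

class EventInserter:
    def __init__(self, granularity):
        self.granularity = granularity
        self.since_last = 0        # bytes allocated since the last marker
        self.events = 0            # markers emitted so far

    def on_alloc(self, nbytes):
        self.since_last += nbytes
        if self.since_last >= self.granularity:
            self.events += 1       # would record a CUDA event here
            self.since_last = 0
```

A larger granularity emits fewer markers (coarse-grained tracking, suited to CPU-bound models like GNMT), while a smaller one tracks memory more tightly.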
Table 1: Throughput with memory tracking (normalized to the throughput with no memory tracking). All, Fine-grained, and Coarse-grained track GPU memory for every kernel launch, every 8 MB allocation, and every 256 MB allocation, respectively.

On the other hand, for models that use the CPU less, like the CNN models and BERT, even fine-grained tracking does not noticeably delay the training iteration; as Table 1 shows, fine-grained tracking can achieve better throughput and memory efficiency for such models, e.g., RHN and BERT. Zico chooses the right memory tracking granularity, minimizing the overhead.

**Sharing granularity.** As mentioned in Section 6.3, if the sharing granularity is too fine-grained, e.g., tensor granularity, contention on the shared lock becomes non-negligible. Table 2 shows the sensitivity to different sharing granularity choices, where the size of a small region is set to 512 KB. The table presents the throughput normalized to that of our default region size (64 MB) for memory sharing. Tensor-level sharing introduces up to 10% throughput degradation, as shown for GNMT.

**Scheduling.** Making a scheduling decision in our scrooge scheduler takes $O(n)$ time, where $n$ is the (small) number of regions exercised by the co-located jobs. The observed overhead is only a few hundred nanoseconds, so scrooge scheduler has nearly zero scheduling overhead. Moreover, the scheduling process of one job does not interfere with that of the co-located job, since each job has a dedicated CPU thread.

### 8 Related Work

**Temporal/Spatial GPU sharing.** Temporal GPU sharing represents software-based techniques that time-share a GPU for DNN workloads. Gandiva [43] first proposed a GPU time-slicing mechanism, mainly to accelerate hyperparameter tuning jobs. It initiates job switching at iteration boundaries to reduce CPU-GPU communication overhead.
Salus [44] tries to remove the switching overhead by keeping the model parameters of a job resident in GPU memory even when the job is inactive. It further integrates a spatial sharing mechanism to harness underutilized memory, in a similar way to MPS [10]. We faithfully compare Zico with both temporal and spatial sharing in Section 7. **Tensor swapping/recomputation.** Prior works use host memory as swap storage for DNN training to mitigate the memory footprint in GPU [17,32,36]. vDNN [36] predictively swaps tensors ahead of time to overlap CPU-GPU communication with GPU computation during training. It mainly focuses on swapping the inputs of convolutional layers, as they tend to have long lifespans in CNN models. SwapAdvisor [17] jointly considers memory allocation and operator scheduling to optimize swapping decisions. Capuchin [32] proposes a computational-graph-agnostic technique that estimates the costs of tensor swapping and recomputation to make the best choice between the two. Other prior works study dropping tensors created in the forward pass and recomputing them in the backward pass [6,19,41]. SuperNeurons [41] introduces a cost-aware recomputation technique that removes tensors of convolution layers, which are cheap to recompute. Checkmate [19] formulates tensor recomputation as an optimization problem and provides an approximation algorithm to recompute tensors in a timely manner. Similar to tensor swapping, tensor recomputation reduces the memory footprint of a single training job. The goal of Zico is different: Zico reduces the global memory footprint of concurrent training. **Compression.** Many approaches have been proposed to reduce the memory footprint of DNN training, including HW-based compression techniques [7,30]. There are also a few SW-based memory compression techniques. Gist [18] proposes a series of layer-specific encoding techniques to compress tensors, including feature maps.
Echo [46] proposes a compression technique that is more effective on LSTM RNN training, driven by internal operator dependencies. Zico is complementary and can be combined with tensor compression techniques.

### 9 Concluding Remarks

We presented our attempt at realizing GPU memory sharing across concurrent training. The proposed system, Zico, is the first to introduce a memory-aware scheduler that coordinates training iterations among co-located jobs with minimum stall times. Zico works generally for co-locating both identical and non-identical models, regardless of the iteration time and memory pattern of each model. Our experimental results show that Zico outperforms existing GPU sharing approaches. With growing model sizes, very large models such as the GPT family [33,34,38] are typically run with model parallelism or data parallelism to accommodate intermediate tensors in GPU memory. Despite the diverse parallelism in use, we believe Zico benefits both cases, as we still see increasing and decreasing memory usage within an iteration.

### Acknowledgements

We thank our shepherd, Nandita Vijaykumar, and the anonymous reviewers for their valuable comments and suggestions. We also thank Peifeng Yu and Chanho Park for their knowledge sharing and technical support during this work. This work was supported by the Samsung Advanced Institute of Technology, the 2021 Research Fund (1.210050.01) of UNIST (Ulsan National Institute of Science and Technology), an Electronics and Telecommunications Research Institute (ETRI) grant funded by the Korean government (21ZS1300), and National Research Foundation of Korea (NRF) grants funded by the Korea government (MSIT) (No. 2020R1C1C1014940 and NRF-2019R1C1C1005166).

References
Why We Need New Software Testing Technologies

Carol Oliver, Ph.D.
carol@carolcodes.com

Abstract

Mobile and IoT software must perform in a dramatically greater variety of environments than traditional software. Yet the core testing technologies in widespread use today do not directly address this vast environmental variability. Cloud-based device testing is bridging the gap for now, but it is an incomplete solution. This paper presents a Release-Readiness Levels Framework that provides vocabulary and a structure for discussing the gaps between what software testers would like to be able to test and what the existing tools and technologies enable them to test. This paper then identifies the existing software testing technologies that might be extended to better meet practitioner needs, describes the requirements for entirely new software testing technologies to target the needs of mobile and IoT software testing into the future, and offers a glimpse into how the emergence of new testing technology is likely to proceed.

Biography

Dr. Carol Oliver earned a B.S. in Computer Science from Stanford University and a Ph.D. in Computer Science from Florida Institute of Technology. In between, she worked for about 15 years for campus infrastructure IT services at Stanford University, where she did some web app and a lot of middleware software testing.

Copyright Carol Oliver, 2019

1 Introduction

My academic research argues that practitioners need a new generation of tools to support mobile and IoT (Internet of Things) software testing. In this paper, I will present a brief history showing how mobile and IoT software differs significantly from traditional software. Then I will present the Release-Readiness Levels Framework, a tool for discussing the possible extents of software testing on a project—and for features and smaller details within any project.
Next, I will survey the Seven Core Software Testing Technologies available today, highlighting their strengths and limitations and how those different technologies can fulfill (or not) the needs of the Release-Readiness Levels. Finally, I will present the requirements for the next generation of software testing technology and tools, and set context for how the emergence of the new technology is likely to proceed. This paper is drawn from my Ph.D. dissertation (Oliver 2018), and many details and specific citations have been omitted from this paper in the interests of time and space; they are available in my dissertation, which is the primary reference for this paper.

1.1 Scope Note

Many types of testing concerns need addressing when preparing software for production release: application usability and accessibility, installation and update accuracy, security and performance, etc. All these are very important concerns, but this work limits its attention to Functionality Testing.

1.2 Vocabulary Note

This work discusses code moving from Development Focus to Testing Focus. These phrases are used to emphasize that no adherence to any particular software development methodology is implied (e.g. Waterfall, Agile, etc.). In a company with separate departments for development and testing, this change in focus corresponds to sending the code from one department to another. In a one-person shop, this change of focus corresponds to removing the Developer Hat and donning the Tester Hat. Who does the work and when that work is done are not in scope in this paper. The scope of this paper is what work it is possible to do and why that work may be worthwhile. Development Focus work is about trying to find a way to make something work. Testing Focus work is about challenging what has been created to see if it will hold up no matter what adverse circumstances occur.
These are two creative efforts with a shared purpose (the production of successful software) but different—and contradictory—goals.

2 Major Phases of Computing and Software Testing

History tells how software testing came to be what it is today. A look at the arc of computing history reveals how software testing is inevitably entwined with the state of computing technology. The earliest computers created a 100% known software execution environment: Programs were written in each computer’s particular language to take advantage of each computer’s unique capabilities and restrictions. Unless a moth flew into the hardware¹, the full context of the computer while running the program was predictable. Mainframes diversified, computing languages abstracted, and programs came to be written for multiple hardware platforms. The software execution environment could no longer be fully predicted, but the variations between computing execution environments remained relatively constrained. The emergence of desktop computing created an explosion of change. Many new hardware manufacturers emerged, making both new computers and various accessories to extend their capabilities. Peripherals all spoke different hardware languages, and operating systems relied on the hardware manufacturers to create functioning drivers to enable their devices to work on consumers’ systems. For some years, computing saw rampant device driver incompatibilities. When a consumer purchased desktop printing software, the consumer needed to check if their printer was on the list of compatible printers for that program. It was hard to predict the execution environment for software, and strange errors occurred (e.g. a program might not run if a peripheral not even used by that program was attached to the system).

---
¹ The historical first bug, found in 1947. See http://www.computerhistory.org/tdih/September/9/.
Then standards emerged and the operating systems with consumer-market dominance took over the bulk of interfacing to hardware (C. Kaner, pers. comm.). Most hardware manufacturers implemented exactly that interface, most software manufacturers programmed for only that interface, and the operating system passed information across the border. Execution environments for programs became largely predictable again, and problems related to environmental conditions became a relatively small aspect of software testing. The emergence of mobile computing created another explosion of change, several magnitudes larger. There is now a very rapid change cycle for hardware, operating systems, and applications. Rather than years, merely months pass between major changes in each category. A few consumers change quickly to the most recent everything; many lag a few revisions behind state of the art (Android Open Source Project 2018); and a few still run systems others have consigned to museums (Google Tech Talks 2013). Just that aspect of environmental conditions is wildly variable for mobile apps. The mere aspect of mobility adds to the unpredictability of environmental conditions while a program runs. As they move about the world, mobile devices change their network connectivity methods and parameters, and they are impacted by changes in the geography and weather conditions around them. Anecdotally, network connectivity around skyscrapers and device stability in extreme temperatures are both problems. Exacerbating the unpredictability problem, mobile devices consume input from a wide variety of sensors that previously were specialty equipment but now are commonplace, like magnetometers and accelerometers. Applications are expected to respond to a great deal of that data, and sometimes the mobile operating system imposes the new conditions on the programs whether the programs handle them gracefully or not (e.g. 
screen orientation changes, dimming the screen in response to changes in ambient light, replacing the active network connection details at any moment, etc.). These types of environmental unpredictability cannot be solved by standardization of interfaces; they are inherent aspects of mobile computing. IoT software shares many of the environmental characteristics of mobile computing, especially the deep reliance on embedded sensors. This history shows that mobile and IoT apps operate within a fundamentally different scale and scope of environmental unpredictability than programs in any prior computing era. To be effective, automated software testing tools need to address this unpredictability directly.

3 Release-Readiness Levels Framework

When I worked as a practitioner, specific tacit goals underpinned my software testing activities. As I met other senior testers and discussed experiences with them, I found resonance in how we approached our software testing efforts. We all wrestled with the question of judging when software was performing well enough that we could recommend its release. Our project managers made the release decisions, but they expected us to render professional judgments and supporting evidence to inform those decisions. We lacked a good framework for discussing the options. Historically, the ISO/IEC/IEEE 29119 standards² and their predecessors have not been entirely helpful. Guidance on managing the testing process or doing requirements traceability does not directly answer the core problem: How to judge if the software will perform adequately in the field. There is tension between how much testing is feasible to do and how much productive information about the range of field conditions is needed to enable sound decision-making.
---
² http://softwaretestingstandard.org/

My Release-Readiness Levels Framework (Figure 1) captures one perspective on how to assess that tension and how to discuss what additional testing might be desirable at any point in the project. Underlying this model is the concept of exposing the software to increasingly difficult challenges, thereby increasing knowledge about the variety of conditions in which the software will perform acceptably. Each level in the Release-Readiness Levels Framework applies to individual features, communicating feature sets, and to the program as an integrated whole. The further testing goes through the levels, the further confidence develops that the software will behave desirably in a wider variety of circumstances. Note that this is a map of the testing possibilities – not a statement of required steps. The appropriate point at which to release the software in question is a Project Management decision and varies for each program and often for each release.

3.1 Level 0: Dev Testing

Level 0 testing occurs during Development Focus, and its goal is to assess whether the intended functionality has been implemented. Characteristically, very small aspects of program behavior are analyzed separately from all other behaviors of the program. In modern software development, Dev Testing consists mostly of unit tests, but this is also where style checkers and other static code analyzers are usually applied. Software released at Level 0 (Dev Testing) may contain missing, partly-functional, and incomplete features. It may not install cleanly into any environment other than the Development Environment, and it may not work as a cohesive whole even if it installs cleanly. Professional software development shops usually test further than this before production release.

3.2 Level 1: Ready to Test

The goal at Level 1 is to assess whether the installed software is ready to be challenged by Testing Focus work.
Characteristically, functionality is tested only superficially, just enough to show whether the program crashes with trivial ease or permits access to all features of current interest; it is common in the early stages of a project for features to be missing or known to be not ready for Testing Focus yet. Specific behaviors checked at Ready to Test tend to focus upon whether the install process worked correctly and upon the basic stability of features to be tested. Happy-path scenarios in which the software is used exactly as intended are checked, along with the most common or most predictable error-path scenarios. Software released at Level 1 (Ready to Test) should install properly into expected operating environments but may break if used even slightly differently than anticipated, whether by user action, data state, or an operating environment that was not tested.

3.3 Level 2: Closely Controlled

The goal at Level 2 is to assess whether features work when challenged by tests with carefully controlled parameters. Characteristically, test data is hardcoded but may occasionally be drawn from small lists. Testing activities consist of challenging the program’s assumptions in various ways, subjecting it to atypical usage patterns, data that is potentially hard to process, easily-triggered system and environmental interruptions, etc. Software released at Level 2 (Closely Controlled) is likely to suffer many field failures on systems that differ from Development and Testing Environments; it may also break if used in unanticipated ways or if used on a continuous basis for some block of time, as characteristically tests at this level consist of very brief runs of the app to test a specific behavior, followed by resetting the app to a clean state before testing the next specific behavior.
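As a concrete illustration of this Level 2 pattern, consider the hedged sketch below. Everything in it is hypothetical (the `Cart` feature and its tests are my invention, not an example from this paper): each test is a brief run against hardcoded data, and the app is reset to a clean state before the next specific behavior is checked.

```python
# Hypothetical Level 2 (Closely Controlled) tests: hardcoded inputs,
# one behavior per test, fresh state before each run.

class Cart:
    """Stand-in for a feature under test (illustrative only)."""
    def __init__(self):
        self.items = {}

    def add(self, sku, qty):
        if qty <= 0:
            raise ValueError("quantity must be positive")
        self.items[sku] = self.items.get(sku, 0) + qty

    def total_items(self):
        return sum(self.items.values())

def test_add_single_item():
    cart = Cart()              # clean state for this one behavior
    cart.add("SKU-001", 1)     # hardcoded test data
    assert cart.total_items() == 1

def test_add_rejects_zero_quantity():
    cart = Cart()              # reset: no state shared with the prior test
    try:
        cart.add("SKU-001", 0)
        assert False, "expected ValueError"
    except ValueError:
        pass                   # the predictable error path is handled

test_add_single_item()
test_add_rejects_zero_quantity()
```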
Such run intervals provide clear boundaries between tests, making it easy to identify which test failed, but these runs are markedly shorter than most scenarios in which users will actually use the app.

3.4 Level 3: Predictable Variations

The goal at Level 3 is to assess whether features of the program continue to work when test conditions are significantly loosened and many predictable variations in those parameters are exercised. Test cases at this level take different values at each execution, drawing specifics from large sets rather than fixed details or small lists of possibilities. Characteristically, Predictable Variations tests intensely vary one or a few variables while holding others constant, as the knowledge sought is about how those specific variations alter the program’s response. Software released at Level 3 (Predictable Variations) is markedly more stable in a diverse variety of conditions than software released at Level 2 (Closely Controlled), but it may still experience a moderate number of field failures. These stem largely from unexpected patterns of use and from untested environmental and data conditions. The more aspects of environmental unpredictability not tested, the higher the likelihood of finding many problems after release.

3.5 Level 4: Discovers Unexpected

The goal at Level 4 is for testing to uncover problems that no one on the project could predict, sometimes that no one on the project could imagine. Experienced practitioners regularly exchange tales of finding bugs like this, often accidentally, because modern programs are complex and contain many tacit dependencies within themselves, with their operating environment, and with their data. In various ways, a program and its operating environment tested at this level are placed under stress.
Many of the test techniques for finding these unpredictable bugs consist of HiVAT (High Volume Automated Testing) techniques, such as long-sequence tests, long-duration tests, and random variations of structured input (i.e. fuzzing). Characteristically, the scale of tests increases dramatically at this level. HiVAT techniques are intended to efficiently run hundreds of thousands, millions, or scrillions of specific tests. Senior technical testers in the practitioner community have created HiVAT testing efforts for decades, but the approach typically requires programming expertise, and these techniques are not widespread in general practice. Software released at Level 4 (Discovers Unexpected) tends to experience some field failures as combinations of conditions are discovered in the field that were not reached by intensive testing, but these failures are rarer than at earlier levels.

3.6 Level 5: Exhausted Capacities

The goal at Level 5 is for testing efforts to exhaust their capacities, given the capabilities of the suite of tools and techniques and the time available. Stresses upon the software are dramatically increased, sampling scope approaches closer to exhaustive, and system testing increases the variations in exercising interleaved, cooperating, and coexisting features. This level of testing intensity is typically done when the risk of failure is extremely high (e.g. data-processing logic that could corrupt the database if done wrong, life-support systems, high-impact infrastructure systems, etc.). Possibly the best-known example of Level 5 (Exhausted Capacities) testing is the preparations made for Y2K as computing approached the year 2000. Computing professionals not involved in the Y2K transition often lack understanding of the details.
The problem was caused by a vast quantity of legacy code that assumed all years fit the format 19xx, and so stored only the last two digits to distinguish the year; the imminent arrival of the year 2000 meant all that code was going to break, and comparisons for which data was older or newer would be incorrect unless four digits were compared. Superficially, the answer appears to be as simple as just replacing two-digit storage with four-digit storage. However, the reality was a great deal more complex than that. Many companies attempted to patch their legacy software only to find that every patch spawned multiple new problems; the harder they tried to fix their code, the less functional their software became. In the United States, many banks sold themselves to the few banks with Y2K-compliant software, and antitrust regulators approved the sales because the financial infrastructure of the country had to continue to work reliably (C. Kaner, pers. comm.). Problems riddled software at all levels, and companies in a vast array of disciplines created entire duplicate computing systems and networks so they could advance the date in this test environment and see what would break and what fixes actually worked. That degree of duplicate computing hardware alone cost an exorbitant amount (easily millions or billions or more, worldwide), and very significant human time was invested in testing and retesting critical systems. Y2K was a worldwide test effort that covered years. Popular understanding is that Y2K was a lot of noise made about nothing, because when the clock turned over, only very minor things broke; the reality is that Y2K was an incredibly successful, Level 5 testing effort that exhausted the capacities of the people, technology, and time invested in it. Software released at Level 5 (Exhausted Capacities) experiences the fewest possible field failures, but some failures remain possible.
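The two-digit comparison failure at the heart of Y2K can be made concrete with a small illustrative sketch. This is my example, not code drawn from any actual Y2K system: it shows how an "is record A newer than record B?" check silently inverts once a stored year crosses into 2000.

```python
# Illustrative only: the legacy style stored 1999 as 99 and 2000 as 0.

def newer_two_digit(yy_a, yy_b):
    # Legacy-style comparison on the stored two-digit year.
    return yy_a > yy_b

# 1999 vs 2000, as legacy code stored them:
assert newer_two_digit(99, 0) is True   # wrongly claims 1999 is newer than 2000

def newer_four_digit(yyyy_a, yyyy_b):
    # The fix: compare all four digits.
    return yyyy_a > yyyy_b

assert newer_four_digit(1999, 2000) is False  # correct ordering restored
```

The assertions show why "just store four digits" was the obvious fix, while the surrounding text explains why applying that fix across vast quantities of interdependent legacy code was anything but simple.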
3.7 Release-Readiness Levels Recap

To properly assess whether mobile and IoT software will perform adequately in their field conditions of massive environmental unpredictability, software testing efforts need to include tests at Level 3 (Predictable Variations) and Level 4 (Discovers Unexpected). Some critical behaviors within many apps likely deserve tests at Level 5 (Exhausted Capacities); in some cases, whole software systems may merit that level of testing (e.g. autonomous vehicles). So, what levels of testing are enabled by our current software testing technologies and tools?

4 Core Software Testing Technologies

The process of software testing involves creating tests, executing them, and evaluating what happened as a result. Whether manual or automated, executing a test relies on specific technology to make it function. These software testing technologies are mechanisms for exercising a program in directed ways or for obtaining information about the software’s behavior. They are the keys which enable different test methods, the gateways which define the character of what kinds of tests are possible. I identified seven core testing technologies present in today’s readily available tools and the academic research literature:

1. Physical devices
2. Virtual devices (emulators and simulators)
3. Simulation environments
4. Mechanisms for interacting with the Visual GUI Tree
5. Image-comparison
6. Code manipulation (e.g. xUnit, code instrumentation, etc.)
7. System monitoring

Each technology enables certain kinds of tests and is better-suited to some types of investigations than others. Existing tools and testing approaches fulfill a technology’s potential to differing degrees, and each implementation of a technology may be evaluated in terms of how well it handles the environmental unpredictability characteristic of mobile and IoT software.

4.1 Physical Devices

Testing on physical devices is the baseline testing technology.
When programs were written for just one computer, testing the program meant running it to see if it worked, at least well enough for the intended purpose at that time. Beginning programming students do the same thing today. As computers diversified, software testing on physical devices expanded to include compatibility assessments between the program and different execution environments. In the desktop computing era, software development companies typically had test labs containing a variety of supported equipment. Companies could buy representative hardware for all their major supported systems and expect to use those machines for years; significantly new systems came out every few years when chipsets changed, and the operating systems updated every few years as well. In the mobile era, the comprehensive in-house test lab has ceased to be feasible. Instead of dozens of distinct hardware platforms, there are thousands in use worldwide (OpenSignal 2015). New device models appear at least once a year from most device manufacturers, and sometimes more frequently than that; operating systems receive frequent major updates (sometimes several times per year) and nearly continuous minor updates. The scale and pace of mobile environment changes make maintaining traditional in-house test labs of physical devices for all in-scope systems prohibitively expensive. Thus, the environmental unpredictability problem mobile software testing faces for this technology is simply access to a diverse-enough collection of physical devices.

4.1.1 Accessing Sufficient Diversity

One strategy is to test only on the top 10-15 devices in use by a mobile app’s target users, broken into strata of high-end devices, mid-range devices, and low-end devices; which devices belong in which grouping changes very quickly, as new devices are released. This is an example of Level 2 (Closely Controlled) testing because the test data varied here is a small list (10-15 devices).
Despite its limitations, this strategy should work adequately for reasonably homogeneous target populations; as the diversity of target users increases, more of their systems will not be represented in this sampling strategy. Arguably, the most common option in modern app development is cloud-based testing. Fundamentally, these cloud services leverage existing software testing technologies to provide networked manual or automated access to physical devices hosted and maintained by the cloud service provider; various providers exist, and exactly who is in business changes over time. Because so little can be inferred about the stability of software on other devices based upon its behavior on one device, mere replication of tests across many different mobile devices is not very informative. Tens or even a few hundreds of devices tested is still a small fraction of the possible thousands of devices that a mobile app could be deployed to, so it is difficult for mere replication across devices to lift testing beyond Level 2 (Closely Controlled). Further restricting the power of cloud-based device testing, the kinds of tests that can be performed on such devices are bounded by the testing interfaces provided by the service.

4.1.2 Technology Strengths and Limitations

The greatest strength of testing on physical devices is trustworthy realism about how apps will behave on that device. Although that observation seems obvious, fidelity to reality is a great enough issue in mobile app testing that the point is emphasized by academics and practitioners alike (Delamaro, Vincenzi, and Maldonado 2006; Ridene and Barbier 2011; Muccini, Di Francesco, and Esposito 2012; Nagowah and Sowamber 2012; Gao et al. 2014; Vilkomir and Amstutz 2014; Knott 2015). One of the motivators for the emphasis on realism is the mobile fragmentation problem.
Mobile hardware varies greatly in chipsets; screen size, display resolution, and pixel density; number and types of network interfaces, sensors, and embedded devices (e.g. camera, speakers); quantity of working memory and local storage space; and battery performance. The current state-of-the-art solution to this vast variety is to test on a sufficient number of real devices, where “sufficient” is interpreted individually by every software publisher. Some limitations apply to using physical devices as a testing technology. Mobile devices are consumer devices, and (Google Tech Talks 2013) points out that consumer devices are not designed to run 24x7, so hardware used for extensive testing will fail after a few months and need replacing.

4.1.3 Implementation Limitations

Another pragmatic limitation on the number and type of tests executed remains the financial cost of testing. Even though cloud-based testing services have dramatically lowered the costs to software producers of provisioning and maintaining physical devices, these services still cost money. Budget and service constraints limit the testing minutes purchased for a project, forcing tradeoffs in what is tested and how extensively. Cloud-based pools of physical devices vastly improved access to a variety of devices, but there is little variety in the physical or network environments of server rooms, providing little opportunity to exercise the embedded sensors and other equipment in cloud-based mobile devices. Examining these aspects of mobile devices in any detail still requires physical access to the appropriate device, and the common solution is for individuals to move around the world, interacting with each device one at a time. The obvious problems in scaling this approach explain the appeal of crowdsourcing testing³, but that approach lends itself to haphazard data collection and great challenges in repeatability.
Again, cloud-based testing is limited in its possibilities by the testing interfaces that the service providers support. Generally, it is not possible to extend or replace a provider’s testing framework, so tests are constrained to the types and design styles the provided frameworks support.

---
³ The practice of outsourcing testing to a crowd of people, mostly without any technical training, in an attempt to obtain useful feedback about how the software will perform for real users (Knott 2015, 141–45).

4.1.4 Physical Devices Technology Summary

Testing on physical devices yields highly trustworthy results but also incurs test-management overhead costs that can be significant, somewhat limiting scale. Cloud-based providers close the gap in trying to examine the effects of all the combinations of hardware and operating system versions, but they do so in a server room environment that provides little scope for exercising the embedded sensors and other equipment in mobile devices. A robust solution to scaling testing that exercises embedded sensors and equipment is either not yet invented or not yet widely available. The types and design styles of tests run on cloud-based devices are limited to what the provider's frameworks support; support for custom testing needs is quite rare.

4.2 Virtual Devices

Virtual devices leverage readily-available and affordable hardware to mimic the specific hardware and operating environment of less easily obtained systems. The manual analog is the collection of techniques involved in desk-checking, where a human reads through the code, pretending to be the computer, analyzing how the program is going to run. Computer-based virtual environments appear to have their roots in the first time-sharing operating systems of the 1960s (Creasy 1981; Pettus 1999). Over the next several decades, computing professionals developed a decided preference for emulators over simulators.
Simulators ran faster, but they characteristically used the full resources of the host computer to perform the computing tasks. Emulators created representations of the foreign hardware within the host computer, using just the resources the real device would have available and using a virtual chipset to execute the foreign program at the binary or system-call level. Testing on simulators gave an approximation of how the program would behave on its target computer, but the exact fidelity in representing the emulated device created great trust in the results obtained while using the emulator. At the beginning of the mobile era, devices were especially resource-constrained, making it difficult to run testing software on the physical devices. Vendors addressed this problem by providing virtual devices to enable testing (Satoh 2003). Although traditional programs tend to be developed on their target platforms, mobile apps are developed on desktop computers and deployed to mobile devices later. Consequently, virtual devices are everyday tools during mobile app development today. 4.2.1 Technology Strengths and Limitations The primary strengths of virtual devices for mobile apps today are enabling development tasks on non-mobile systems and the ease of switching between devices. The practical matter of cost savings is another strength inherent in the technology, as the actual physical devices are not a factor. The history of simulators and emulators from the desktop era has shaped expectations about them in the mobile era. When the term "emulator" is applied (e.g. the Android Emulator), the implication is that it will provide the exact-fidelity trustworthiness of the prior era. Unfortunately, there are significant limitations to the virtual devices representing mobile devices. Although the mobile CPU, OS version, RAM, and screen size, resolution, and pixel density are virtually represented, it is well-understood that sensors and other embedded hardware commonly are not.
Functionality involving the camera, GPS, accelerometer, microphone, speakers, gyroscope, or compass; sensors for ambient light, pressure, temperature, humidity, or proximity; and network connections like Bluetooth, Wi-Fi, NFC (Near Field Communication), and cellular communication for 3G, 4G, etc. – all these are difficult or impossible to test on virtual devices, especially those provided by the platform vendors (Muccini, Di Francesco, and Esposito 2012; Griebe and Gruhn 2014; Knott 2015, 115). It is not clear that including all these embedded components in a virtual device would even be useful; most of these are pure input mechanisms, not manipulated by either the user or the program. Merely representing them as pieces of hardware in a virtual device is not enough to make them usable; a different technology is required. 4.2.2 Implementation Limitations The Android ecosystem faces another set of serious limitations, stemming from its open-source nature. Microsoft controls their operating system, and Apple controls both their hardware and software, so these ecosystems may not have the same problem. However, the Android OS is commonly customized by both hardware manufacturers and cell service providers. Some of this customization is required to make Android work on the equipment; other aspects are customized for user experience, and sometimes these changes are very significant functionality differences. Complicating matters, most of these customizations are proprietary and therefore not publicly documented. This means the best the Android Emulator can do after setting up the partial-hardware representation of a device is layer on a stock version of Android OS. Seen in that light, it is not just that features to send context data to the virtual device are awkward to use or unavailable, it is that testing on a virtual device lies about how well things will work in the field. 
The virtual devices are idealized deployment environments that incompletely and inaccurately represent the real devices. This limits the power of testing on Virtual Devices to Level 1 (Ready to Test) because the information obtained is only a ghost of reality. This explains why experienced practitioners strongly advise testing primarily on physical devices in real environments (Knott 2015, 3–4), limiting the use—if any—of virtual devices to only the most basic, simplistic tests (Kohl 2012, 278–79; Knott 2015, 52). 4.2.3 Virtual Devices Technology Summary Testing on virtual devices enables effective development of mobile apps, since development is done on different systems than where the software is deployed. However, all mobile virtual devices lack completeness; significant hardware components are routinely omitted, and Virtual Device technology alone is insufficient to exercise the sensors and other embedded equipment even if they were included. The Android Emulator diverges even further from exact fidelity in representing a physical device because it can only apply a stock version of the Android operating system, absent all proprietary customizations made by hardware manufacturers or cell service providers. 4.3 Simulation Environments Simulation environments take simulation beyond the device level, instead creating virtual representations of various aspects of the world surrounding the mobile device. Academic work has mostly studied networked communication scenarios and a few richer environmental context simulations. There is little discussion in the practitioner literature about simulation environments being used for mobile software testing. This suggests that good tools for useful mobile simulation environments are not generally accessible to practitioners. 
4.3.1 Technology Strengths and Limitations Simulation environments bring realistic deployment scenarios into a controlled lab environment, allowing intensive variation of specific aspects of interest without the expense of travelling to the correct conditions or waiting long enough for the correct conditions to occur. Theoretically, this should enable testing at Level 3 (Predictable Variations), Level 4 (Discovers Unexpected), and Level 5 (Exhausted Capacities). Most applications of simulation environments also involve sophisticated modeling of the conditions of interest, to ensure the simulation meaningfully represents real conditions. For example, performance testing of websites typically involves estimates mixing percentages of different representative types of users so that realistic traffic and session loads can be generated. This complexity tends to isolate use of simulation tools (like those for performance testing websites) to only a few, highly-specialized testers brought in as consultants when a company has a specific need. Simulation environments are rarely used – if at all – by most practitioner software testers. 4.3.2 Implementation Limitations The primary limitation of simulation environments applied to mobile testing today is that there are so few instances exploring it. A secondary limitation is that existing tools focus on testing user-generated events in short bursts but do not enable testing extended usage scenarios (an hour or more) (Meads, Naderi, and Warren 2015); this means that the simulation environments are not representative of reality in terms of how users will use the apps, the devices, the network resources, etc. Furthermore, the focus on user-generated events means little testing is being triggered by sensor readings or background services, despite those being extremely common vectors of input to modern mobile and IoT devices. 
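To make the modeling point above concrete, the weighted traffic mix used in performance testing can be sketched in a few lines of Python; the user types and percentages here are invented for illustration, not drawn from any cited tool.

```python
import random

# Hypothetical mix of user types and their share of traffic,
# e.g. as estimated from production analytics.
USER_MIX = {"browser": 0.60, "buyer": 0.25, "power_user": 0.15}

def generate_sessions(n, seed=42):
    """Draw n simulated sessions according to the weighted user mix."""
    rng = random.Random(seed)          # seeded for repeatable test runs
    types = list(USER_MIX)
    weights = [USER_MIX[t] for t in types]
    return [rng.choices(types, weights=weights, k=1)[0] for _ in range(n)]

sessions = generate_sessions(1000)
print({t: sessions.count(t) for t in USER_MIX})
```

A real simulation environment would layer device, network, and sensor conditions on top of such a session mix; this sketch only shows the load-modeling core of the idea.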
4.3.3 Simulation Environments Summary Simulation environments contain rich potential to assist in testing mobile and IoT devices, but their potential is not well-addressed at this time. Academic work investigates networked communication scenarios and some environmental context simulation, but the scope of the tests is limited overall. Practitioners seem to have no useful access to tools for applying simulation environments to testing mobile apps. 4.4 Visual GUI Tree Mechanisms for interacting with the Visual GUI Tree form arguably the most commonly used GUI application testing technology in use today. The Visual GUI Tree is the hierarchical arrangement of objects forming the rendered display of a modern GUI. All the currently-visible objects on the screen (e.g. buttons, images, etc.) are part of it, but so are many invisible objects that control how objects on the screen are placed (e.g. layout managers which arrange contained objects in a row horizontally, vertically, or in a grid). Assistive technologies like screen readers for blind users leverage the Visual GUI Tree, so it has wide availability on desktop systems, in web browsers, and on mobile devices. To encode a test accessing the Visual GUI Tree, some absolute identification of the object in question is required. The earliest interaction mechanisms identified objects based on their coordinates on the screen, and that remains a common fallback mechanism today. Some interaction mechanisms scrape the screen to locate specific text, which is then used to identify the object of interest; this allows the text to appear at different coordinates and not break the encoded test. The most reliable means of accessing a specific object is to use a distinctive attribute of that object, commonly some kind of identifier or xname value. If such a unique identifier is not available, then the object’s XPath location in the Visual Tree (i.e. 
the path of elements that must be traversed to reach the desired element) may be used to navigate precisely to the desired object. Once a desired GUI object is located, testing frameworks interact with it as an object, entering text or sending click commands; more mobile-centric frameworks provide gesture commands, such as swiping the screen in a particular direction. Tools using the Visual GUI Tree are prolific and widely used today, as this is an excellent technology for replicating user interactions with a GUI. The vendor-provided automation tools for each mobile platform use this technology, as do GUI Record and Replay tools. All the cloud-based device services primarily leverage the Visual GUI Tree to replicate user interactions. Most of the tests implemented in Visual GUI Tree tools are Level 2 (Closely Controlled) tests, restricted to specific data hardcoded into each test, repeated verbatim with every test run. This level of detail is the baseline scenario for automated testing. Unfortunately, it is rare for these tools to offer features to parameterize test case data, much less to determine it programmatically at runtime, features that are required to enable tests at Level 3 (Predictable Variations). Whether customization to extend the power of tests is possible in Visual GUI Tree tools usually depends on how flexible a programming language the tool uses as its test scripting language. The need to program scripts in a more sophisticated manner is at odds with the marketing of these tools, much of which focuses on enabling testing by non-programmers, so features enabling coding flexibility are not a high-priority for many tool-makers. In recent years, an increasing number of testing services have advertised pre-packaged test suites designed to test any app without human effort. These typically install the software and apply Random Monkey testing to the GUI, looking for crashes.
Anecdotally, these generic test suites are finding bugs in many mobile apps, but they necessarily cannot be checking substantial functionality of any of them because no generic test suite can know anything about the functionality of any program. Consequently, these tests operate at Level 1 (Ready to Test). 4.4.1 Technology Strengths and Limitations Ever since GUIs became widely-implemented using objects, interacting with those objects by accessing the Visual GUI Tree has become widespread across desktop, web, and mobile applications. It is an excellent way to automate interaction with a GUI in a manner that replicates how humans interact with it, through exactly the same objects accessed in almost the same ways. When the Visual GUI Tree works, it works very well, but it cannot work when it cannot uniquely identify an object. (Chang, Yeh, and Robert C. Miller 2010) note that sometimes GUI objects are not given unique identifier attributes. This happens rather frequently in practice, because developers often have no need of such identifiers to uniquely access the object; consequently, the identifiers required by Visual GUI Tree testing tools may only be added by special request to accommodate testers. As a matter of human process, this is not a reliable system; it is tedious work for the developers and adds clutter to their code. When unique identifier attributes are unavailable, Visual GUI Tree tools fall back to XPath navigational routes to objects or to specific coordinates on the screen. Neither of these works with variable GUI elements – items that may change position in a list as more data is added to the application, or items that are programmatically populated into a display template based on runtime selection filters. Consider a kitchen pantry app that is asked to display only the perishable items expiring in the next week.
Each of those items is selected for display at runtime based on the state of the database and the value of the timestamp when the query is issued; in mobile apps, these are typically displayed as individual GUI objects, but there is no reliable identifying attribute that can be assigned to them and expected to persist across repeated queries. Sometimes identifiers are dynamically created for such elements while the program runs, but new values are dynamically created the next time the program runs, making it impossible to script a repeatable test. There is also no guarantee that each type of item will be unique (how would one distinguish between five different bananas?), and elements may not be populated into the display in the same order every time. The identification information simply is not naturally present to set up repeatable tests that will succeed across runs of the application. Automating tests in these situations quickly becomes more a matter of coordinating and maintaining test data (self-verifying records or known-state of the whole database) than about testing the behavior of the app. The Visual GUI Tree also does not handle any case where one GUI element combines several different behaviors within it, determined by position of interaction on the element rather than by its object identity. Image maps on web pages are good examples, as clicking on different parts of the image triggers different actions. Many mobile games have visual control screens that superficially resemble image maps, where things the user can choose to do are carefully integrated into an overall image. The Visual GUI Tree is excellent technology for testing behaviors triggered by user actions, but not sufficient for testing behaviors not triggered by user actions. Context-dependent inputs from various sensors and most error scenarios need another testing technology applied before they manifest, because they result from something else happening while the user is using the app.
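The object-identification cascade described in this section (prefer a unique identifier attribute, fall back to a positional XPath route) can be sketched with Python's standard-library XML tree standing in for a real Visual GUI Tree; the widget names and ids below are invented for illustration.

```python
import xml.etree.ElementTree as ET

# A toy Visual GUI Tree: a vertical layout holding a label and two buttons.
SCREEN = ET.fromstring("""
<layout orientation="vertical">
  <label text="Pantry"/>
  <row>
    <button id="btn_add" text="Add item"/>
    <button text="Delete"/>
  </row>
</layout>
""")

def find_widget(root, widget_id=None, xpath=None):
    """Prefer a unique id attribute; fall back to an XPath-style route."""
    if widget_id is not None:
        hit = root.find(f".//*[@id='{widget_id}']")
        if hit is not None:
            return hit
    if xpath is not None:
        return root.find(xpath)  # brittle: breaks if the layout changes
    return None

print(find_widget(SCREEN, widget_id="btn_add").get("text"))      # → Add item
print(find_widget(SCREEN, xpath="./row/button[2]").get("text"))  # → Delete
```

The second lookup illustrates the fragility discussed above: if a third button were inserted before "Delete", the positional route `./row/button[2]` would silently find the wrong object, while the id-based lookup would keep working.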
4.4.2 Implementation Limitations On the one hand, using the Visual GUI Tree for testing is a mature technology that has been well-developed across several generations of computing platforms, with well-understood recurrent difficulties and well-known solutions to those difficulties. On the other hand, certain usage limitations have never been robustly supported. Examples from desktop applications (Chisholm 1997; Nyman 1999; Borjesson and Feldt 2012; Bauersfeld and Vos 2014) and web applications (Stocco et al. 2014) note that custom objects are not handled well in Visual GUI Tree tools. Custom objects derive from established objects but behave differently, often using the technology in non-standard ways (e.g. a list element that contains different background colors for each list item but no text, making a color-picker). The testing tools that rely on the Visual GUI Tree only know about the standard GUI widgets; they cannot know about the custom-created ones, and few tools enable programming extensions to the tool to allow accurate processing of custom GUI widgets. In a similar vein, (Nyman 1999) and (Nguyen et al. 2014) document difficulties in handling GUI components with context-dependent text strings, such as windows titled after documents or error messages with non-constant details. Sometimes the record-and-replay usage scenario for Visual GUI Tree tools is blamed for the lack of features to handle this need, as the naïve understanding of record-and-replay expects exact repetition of all details and does not imagine tuning the recorded script to be more powerful and to handle variations fitting specific patterns. (Chang, Yeh, and Rob Miller 2011) point out that some textual data that is read programmatically from object attributes may not be formatted the same way as the text string is displayed on screen. Date and time stamps are particularly likely to diverge. 
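One way such pattern-varying text can be handled is with a regular expression that pins down the constant structure of a message and captures the variable details, rather than demanding an exact string match; the message format below is hypothetical.

```python
import re

# Hypothetical error message whose details vary between runs.
message = "Error 1042: could not save 'groceries.db' at 2024-03-07 18:02:11"

# Match the constant structure; capture the variable details.
pattern = re.compile(
    r"Error (?P<code>\d+): could not save '(?P<file>[^']+)' "
    r"at \d{4}-\d{2}-\d{2} \d{2}:\d{2}:\d{2}"
)

m = pattern.fullmatch(message)
assert m is not None  # the message fits the expected pattern
print(m.group("code"), m.group("file"))  # → 1042 groceries.db
```

A test tuned this way survives a changed timestamp or filename while still failing if the structure of the message itself changes, which is exactly the flexibility the naïve record-and-replay model lacks.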
Where the discrepancy matters, this ought to be easily addressed with a little code during Testing Focus work. However, this faces the same difficulty as handling custom GUI widgets and handling parameterized scripts: Much of the marketing around these tools focuses on enabling testing by non-programmers, so features enabling coding flexibility are not high-priority for the tool-makers. (Gronli and Ghinea 2016) note that mobile apps reliant on animations and clock-based display changes can exhibit erratic failures when tests are replayed, due to timing issues that render some GUI widgets inaccessible or not present when the test script expects to find them. This is something of a surprise, as the need for a “wait for element” feature is well-established in the history of using these tools prior to the mobile era; however, (Google Tech Talks 2013) suggests that new instantiations of using this technology may be independently rediscovering this need rather than considering it essential from the beginning. 4.4.3 Visual GUI Tree Technology Summary The Visual GUI Tree is both an excellent and dominant technology for testing mobile apps, but the technology suffers from a persistent lack of capacity to handle custom objects and display patterns that vary in specific details, such as error messages. These limitations are exacerbated by variable GUI elements, which are becoming more common in mobile apps because it saves memory to not instantiate elements not visible on the screen; typically populated as list elements at runtime, these elements commonly lack unique identifiers, reliable paths to their location in the GUI Tree, and unique text contents. Another testing technology is required to test behaviors that result from context-data input or most error conditions, as these situations are not triggered by user actions. 
4.5 Image Comparison Image comparison tools take a snapshot of the rendered display of a running program then compare that snapshot to a reference image previously captured. Either whole-image or part-image comparison may be done. The manual version requires humans to visually compare images to a reference standard. Testing tools at least as far back as the 1980s used whole-image comparison techniques (C. Kaner, pers. comm.), doing pixel-by-pixel comparisons of whole windows. These tools were fragile (every tiny change invalidated the reference image, even if an element just shifted left by one pixel) and fell out of general use, but there has been a resurgence in their use for testing mobile apps, where they are used to compare an entire screen's content to a baseline image (Knott 2015, 109). The second generation of this technology is part-image comparison, which analyzes the snapshot of a running program to see if it contains the reference image somewhere within the whole picture. This is much more resilient to changes in the GUI layout. (Potter 1993)'s demo tool Triggers is commonly cited as the first work of this kind, although (Horowitz and Singhera 1993) used bitmap part-images in their fully-implemented GUI testing tool XTester, which went far beyond a demo implementation of the idea. However, comparison of part-images did not occur widely until the introduction of Sikuli in 2009; the technology required advances in computer vision algorithms and in hardware capacities to become practical (Yeh, Chang, and Robert C. Miller 2009). Sikuli emerged from work seeking to help users confused by a GUI icon find help documentation without needing to guess the icon’s text name. (Chang, Yeh, and Robert C. 
Miller 2010) applied Sikuli specifically to automated testing, focusing primarily on the benefits of unit tests established to check the implementation of Visual-Meaning behaviors, such as the Play button on a video player changing images from the Play triangle to the Pause bars after the Play button was clicked; their scope limited itself to checking the GUI layer only, without any underlying connections to application functionality, so in this example nothing would be known about whether or not the video player could successfully show recorded video. 4.5.1 Technology Strengths and Limitations Part-image comparisons are significantly better-suited for testing Visual-Meaning situations than tests that try to use the Visual GUI Tree for this purpose. (Groder 10/26/2016) contains an enlightening example test case in which the image searched for on-screen is that of Lake Superior rather than of a traditional GUI element object. The picture of Lake Superior is a more accurate indicator of the intended content than any other; if the image claims to be of Lake Superior but actually depicts an elephant, tools using the Visual GUI Tree data will not notice the problem, but image-comparison technology should. However, searching the whole screen for part-image matches is a computationally expensive operation. This means these tests run slowly, which limits how many are worth running. Another problem with using image-comparison for tests is that the technology can only deal with what is currently visible on the screen – if the target is scrolled off the visible portion of the screen or occluded by something else, image comparison cannot find it (Chang, Yeh, and Rob Miller 2011; Groder 10/26/2016). Scrolling may alleviate this problem, but anything that cannot currently be seen cannot be tested. 
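At its core, part-image comparison is a search for one pixel grid inside another. The following deliberately naive sketch requires exact pixel equality; real tools apply computer-vision matching that tolerates rendering differences, so this is only the skeleton of the idea.

```python
def find_subimage(screen, template):
    """Return (row, col) of the first exact match of template in screen, else None."""
    sh, sw = len(screen), len(screen[0])
    th, tw = len(template), len(template[0])
    for r in range(sh - th + 1):
        for c in range(sw - tw + 1):
            if all(screen[r + i][c + j] == template[i][j]
                   for i in range(th) for j in range(tw)):
                return (r, c)
    return None  # e.g. the target is scrolled off-screen or occluded

# Toy "screenshot" and reference part-image, as small grids of pixel values.
screen = [
    [0, 0, 0, 0, 0],
    [0, 1, 2, 0, 0],
    [0, 3, 4, 0, 0],
    [0, 0, 0, 0, 0],
]
template = [[1, 2],
            [3, 4]]
print(find_subimage(screen, template))  # → (1, 1)
```

The nested scan over every candidate position also makes the cost of this search visible: the work grows with both the screen size and the template size, which is why these tests run slowly in practice.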
4.5.2 Implementation Limitations The current set of image-comparison tools is hampered in effectiveness by several technology problems that may be addressable with improved code and algorithms. One impact of these is that the tools do not cope gracefully with differences in image size, color, or style (St. Amant et al. 2000; Chang, Yeh, and Rob Miller 2011; Borjesson and Feldt 2012; Groder 10/26/2016; Gronli and Ghinea 2016; Garousi et al. 2017); Sikuli was designed to address these variations (Yeh, Chang, and Robert C. Miller 2009), but in practice it is not functioning well enough. Practitioners relying on image-comparison tests also must implement their own workarounds for tools that can only recognize one baseline image per screen; this implies that every device tested requires its own library of baseline images (Austin Test Automation Bazaar, pers. comm.). This means tests relying on this data are at best Level 2 (Closely Controlled). Some commercial testing tools, like Eggplant, are encouraging companies to replace their Visual GUI Tree test suites with Image Comparison test suites, selling this next generation of tools as based on how users interact with applications rather than on "outdated" code-centric testing approaches. What that pitch does not make clear is that the resulting test scripts require collecting and maintaining a mammoth library of GUI element images, because images need to be collected for each individual element in all its states (selected, enabled, disabled), in every language, in every platform look-and-feel, in various sizes, etc. --- Notice how you had to pause and translate those word-descriptions into the pictures? That's the essence of a Visual-Meaning situation: The picture conveys the meaning directly and anything else just points to the intended meaning through a stand-in representation.
Collecting and maintaining all that test data is a significant and continuing operational cost, consuming time and energy testers could otherwise spend on testing the app instead of maintaining test data. Image Comparison is also a poor technique for text recognition. (Chang, Yeh, and Rob Miller 2011) points out that existing OCR techniques are designed for printed pages; thus, they expect white backgrounds, high-resolution text, and rectilinear layouts. Yet screen images may be significantly lower resolution than paper, contain colorful backgrounds, and be laid out much less predictably — all details which challenge the success of text-recognition via images. (Borjesson and Feldt 2012) observed a 50% failure rate in Sikuli’s ability to distinguish between the single characters ‘6’ and ‘C’. Lastly, the existing tools poorly handle moving image data, whether in GUI animations or in extracting data from video clips (Borjesson and Feldt 2012; Chang, Yeh, and Robert C. Miller 2010). It is not clear how much this is a limit of the technology, how much a limit of the computer vision algorithms, and how much a limit of the hardware processing resources in use at this time. However, mobile platforms seem to be standardizing on applying more animation to GUI elements for interface usability reasons (Tissoires and Conversy 2011), so this limitation to the tools is a significant challenge. 4.5.3 Image Comparison Technology Summary Computer vision technology has enabled the ability to test for Visual Meaning problems, which is a great advance, but even part-image comparison is not suited for all purposes. The approach fundamentally cannot address functionality not visible on the screen; as such, it cannot trigger tests for incoming context data nor for most error conditions. 
Large libraries of images are currently required to apply the tools across platforms and different display characteristics, and searching for images on the screen slows the tests compared to looking for an identifying attribute via the Visual GUI Tree. Image-comparison of text is error-prone because OCR algorithms are designed for a much higher resolution and sharper contrast scenario. Image Comparison Technology is an excellent complement to Visual GUI Tree Technology, but it is not suited to be a complete replacement of it. 4.6 Code Manipulation This large category encompasses many types of code analysis and modification. All the techniques utilize some insight into the code’s details to enable testing, and all require programming. In modern software development, arguably the most commonly used of these techniques is the xUnit framework. Primary use of xUnit tests is by developers during code creation and code maintenance activities; tiny, specific, very quick execution unit tests are the backbone of most Continuous Integration systems, serving as quick-feedback flags if code changes break the build (Beck 1999, 59–60). These kinds of unit tests form a very important foundation for stabilizing code before it passes from Development Focus to Testing Focus. Other applications of Code Manipulation Technology include code coverage tools, which report the percentage of the lines of source code executed during a test suite run; code style-checkers and other static code-inspection checkers; and a variety of approaches that directly modify the app code to create mock responses from external resources, primarily network accesses. --- 6 This sales tactic is afflicting web-based tests also. (Stocco et al. 2014) is about a tool to automatically transform DOM-based tests to image-comparison ones. DOM (Document Object Model) is the object-based representation of a webpage. 7 Optical Character Recognition
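A minimal sketch of the xUnit style combined with a mocked external resource, using Python's built-in unittest and unittest.mock; the function under test, its URL, and its data are invented for illustration.

```python
import unittest
from unittest import mock

def fetch_item_count(http_get):
    """Hypothetical app logic: asks an injected network layer for data."""
    response = http_get("https://example.com/api/items")  # placeholder URL
    return len(response["items"])

class FetchItemCountTest(unittest.TestCase):
    def test_counts_items_without_touching_the_network(self):
        # The mock stands in for the real network access.
        fake_get = mock.Mock(return_value={"items": ["milk", "eggs"]})
        self.assertEqual(fetch_item_count(fake_get), 2)
        fake_get.assert_called_once()

# Run the suite programmatically so the result is inspectable.
suite = unittest.defaultTestLoader.loadTestsFromTestCase(FetchItemCountTest)
result = unittest.TextTestRunner(verbosity=0).run(suite)
```

This tiny, fast, self-contained shape is what makes such tests suitable as quick-feedback flags in Continuous Integration, and the mocked network call is the simplest instance of the "mock responses from external resources" approach described above.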
At the extreme end of Code Manipulation approaches are cases of modifying the Android distribution itself to enable functionality required for testing purposes; it is unlikely that practitioners are going to these lengths. 4.6.1 Technology Strengths and Limitations The scope and variety of Code Manipulation testing approaches is vast. Essentially, anything can be implemented, so long as it can be coded. Human capacity to address the problem is the limiting factor, not the testing technology. The cost, though, is that some of the knowledge required is deeply technical, and people with that knowledge are often in very senior development positions instead of in testing positions. 4.6.2 Implementation Limitations More significant limitations on Code Manipulation testing approaches come from the mobile security policies on the devices, which significantly restrict inter-application communications relative to desktop behaviors. (Amalfitano et al. 2015) notes that every Android app (e.g. an app under test and a test tool app) runs under its own user identity rather than sharing the same identity with other applications launched by the user of the computer, as is common in traditional desktops. This leads to a need to set up the test tool to run in the same process as the app under test, rather than independently of it, which demands a tight integration between the tool and the app’s source code (Takala, Katara, and Harty 2011; Amalfitano et al. 2012). Different apps can communicate only through the features provided in Android’s Binder inter-process communication system (H. van der Merwe, B. van der Merwe, and Visser 2012). Binder calls in turn require global JNI\(^8\) references (Yan, Yang, and Rountev 2013), and JNI is Android’s integration mechanism with C/C++ libraries and code. 
As a further difficulty, Android’s test automation frameworks limit tests to interacting with one Activity\(^9\) (Kropp and Morales 2010), meaning that these built-in test mechanisms cannot be used to automate tests that flow between Activities – roughly equivalent to the different visible pages a user sees while navigating through an app. 4.6.3 Code Manipulation Technology Summary Code Manipulation is an extremely versatile testing technology, capable of testing at all Release-Readiness Levels, with the caveat that it can grow extremely complicated. xUnit frameworks are in widespread use in industry, as are code coverage tools and static analysis style checkers. These are all used primarily while code is in Development Focus. In academia (and perhaps in some highly technical development shops), code is being modified to provide mock responders for external resource requests, primarily network accesses; this is a step towards Simulation Environments. The extreme complexity end of Code Manipulation involves researchers creating customized versions of the Android distribution just to enable their testing activities; this is technically out of reach of most practitioner shops. 4.7 System Monitoring System monitoring represents all mechanisms for observing and noting device-level system behavior while an app is running. --- 8 Java Native Interface 9 An Android Activity is approximately the same as one screen view of an app, similar in concept to the scope of a webpage. 4.7.1 Technology Strengths and Limitations System monitoring of the core computing system is a mature and well-developed technology. CPU usage, memory usage (RAM), and disk usage are fundamental diagnostics on most computing systems. Because computers have been networked together for so long, monitoring of network communication interfaces is fairly robust, including wireless signal strength.
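A minimal sketch of querying a few of these core diagnostics from Python's standard library (a stand-in for fuller monitoring agents; note there is no comparably simple cross-platform call for sensor or battery readings):

```python
import os
import shutil

def core_diagnostics(path="/"):
    """Collect a few basic device-level readings available without special tools."""
    usage = shutil.disk_usage(path)  # total/used/free bytes for the filesystem
    readings = {
        "cpu_count": os.cpu_count(),
        "disk_total_bytes": usage.total,
        "disk_free_bytes": usage.free,
    }
    try:
        readings["load_avg_1m"] = os.getloadavg()[0]  # Unix only
    except (AttributeError, OSError):
        readings["load_avg_1m"] = None  # not available on this platform
    return readings

print(core_diagnostics())
```

Capturing such readings alongside each test run is the sense in which monitoring "widens the view" of test execution conditions discussed below.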
But monitoring and diagnostic queries of other embedded equipment and sensors is less clear; sometimes these hardware components cannot be queried via software and require external monitoring devices (e.g. power meters to measure battery demand). 4.7.2 Implementation Limitations Very few testing approaches or tools are manipulating device system-state, even though that can have a dramatic effect on software behavior. Resource-starved computers can behave oddly and fail in unexpected ways. The impacts of misbehaving sensors and other embedded equipment are acknowledged but poorly mapped; computing professionals know systems can be affected by these things, but the types of impacts are not well-understood. 4.7.3 Built-in Program Diagnostics Similar to how system monitoring inspects and makes visible device-level system behavior, diagnostics built into a program inspect and make visible internal program state while software is running. Such diagnostics are more like features of the program than they are like Code Manipulations made for testing purposes after the features pass out of Development Focus. These built-in diagnostics may take the form of logging messages to log files as code execution passes through certain points; assertions and exception checking within the code’s logic may be used. In some cases, comprehensive diagnostic languages and controls may be built into the software to allow inspection of hundreds of different details about the execution of the software as it runs; this was common in testing devices like switchboard telephones and laser printers (C. Kaner, pers. comm.) and seems likely to be vital in the testing of IoT software. 4.7.4 System Monitoring Technology Summary System Monitoring is primarily applied to core computing elements and network communications, but other sensors and embedded equipment are rarely monitored. 
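The simplest built-in diagnostics described in 4.7.3 (log messages and assertions at key execution points) might look like the following sketch; the app function and logger name are invented for illustration.

```python
import logging

logging.basicConfig(level=logging.DEBUG, format="%(levelname)s %(message)s")
log = logging.getLogger("pantry")  # hypothetical app logger

def remove_item(inventory, name):
    """App operation instrumented with diagnostics at key execution points."""
    log.debug("remove_item called with name=%r", name)
    assert isinstance(name, str), "diagnostic: name must be a string"
    if name not in inventory:
        log.warning("remove_item: %r not in inventory", name)
        return False
    inventory.remove(name)
    log.debug("remove_item succeeded; %d items remain", len(inventory))
    return True

items = ["milk", "eggs"]
print(remove_item(items, "milk"), remove_item(items, "bread"))  # → True False
```

Even this trivial instrumentation exposes internal state (arguments, branch taken, remaining count) through a channel entirely separate from the GUI, which is the defining property of built-in diagnostics.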
Built-in software diagnostics features are powerful testing tools, but they require extensive Development Focus work to design and implement; they provide an entirely separate interface for interacting with the software, distinct from the routes used by the ordinary traffic the software handles. Because System Monitoring gathers data from a running system, it works in concert with other software testing technologies, widening the view of environmental and program state gathered from tests executed via other technologies. Monitoring the system with generic monitoring functions (e.g. CPU usage, network usage, etc.) increases the Level 2 (Closely Controlled) information available about test execution conditions. Programs with extensive built-in diagnostic features can be flexible enough to uncover Level 4 (Discovers Unexpected) test execution conditions.

4.8 Mobile Software Testing Technologies Recap

As a software testing technology, Physical Devices yield highly trustworthy results. Cloud-based providers greatly expand access to the thousands of active mobile device types but do so in a server room environment that provides little scope for exercising embedded sensors and other equipment in mobile devices. No robust system for testing embedded components at scale is apparent. Cloud-based testing options are limited to the frameworks each service supports, and customized testing options are quite rare. Virtual Devices effectively support Development Focus activities, but all mobile virtual devices lack completeness because significant hardware components are routinely omitted; Virtual Device technology alone is not capable of exercising sensors and other embedded equipment.
Although pre-mobile expectations lead computing professionals to expect high-fidelity behavior from virtual devices labeled as “emulators”, the Android Emulator only represents an idealized runtime environment because it cannot know about the proprietary customizations of the Android OS made by hardware manufacturers and cell service providers. Simulation Environments contain rich potential to exercise all the embedded equipment in mobile devices, but they are complex and do not seem to be available to practitioners. The Visual GUI Tree offers direct access to the GUI objects users interact with, but historically fails to handle custom objects or objects which match a general pattern but vary in details (like error messages). It also does not handle variable GUI elements that are conditionally present based on runtime details, that may appear in different places within a data set (like a list), or that do not naturally contain uniquely identifying attributes. Visual GUI Tree technology alone is not capable of exercising sensors and other embedded equipment. Image Comparison technology – especially part-image comparisons powered by computer vision technology – filled a significant gap in automated GUI testing, enabling the testing of Visual Meaning problems. It runs more slowly than tests via the Visual GUI Tree, cannot address GUI elements not fully visible at the expected time, and currently does not handle basic display variations well. Changes in size, color, or style confuse the current tools, which also struggle with moving image content and with accurately reading text on-screen. Image Comparison technology alone is not capable of exercising sensors and other embedded equipment. Code Manipulation approaches can do anything software can do, but they rapidly grow quite complicated to create. Tools that propagate to industry typically encapsulate a small set of behaviors.
The widespread use of xUnit frameworks, code coverage tools, and style checkers shows that well-packaged behaviors will be extensively used by practitioners. Some mobile mocking tools exist for testing requests of external resources, but these are primarily limited to network access. System Monitoring is robust for watching core computing elements and network communications but is not commonly used for checking sensors, other embedded equipment, or internal program state. Although this technology may be applied during Development Focus to isolate issues, very few Testing Focus tools or automation approaches incorporate system-state conditions to test mobile apps. Overall, the great variety of environmental unpredictability faced by mobile and IoT apps is poorly addressed because five of these seven technologies do not handle testing input from embedded sensors and equipment at scale. Most of the existing testing tools and research efforts function at Level 1 (Ready to Test) or Level 2 (Closely Controlled). Yet the remaining levels are where the capacity to truly address widespread environmental unpredictability – the hallmark condition of mobile and IoT computing – becomes viable. Simulation Environments and Code Manipulation technologies can do this, but at the cost of high complexity. In practice, cloud-based access to physical devices somewhat mitigates the paucity of tools usable at Level 3 (Predictable Variations), Level 4 (Discovers Unexpected), and Level 5 (Exhausted Capacities), but these cloud services exercise only the device environment variable, not the embedded sensors and equipment of mobile devices.

5 Vision of a New Generation of Software Testing Tools

The high-level review of computing history in Section 2 established that mobile and IoT computing are operating within a fundamentally different scale and scope of environmental unpredictability than programs in any prior computing era.
The Release-Readiness Levels discussion in Section 3 clarified that mobile and IoT software need to be tested at Level 3 (Predictable Variations) and Level 4 (Discovers Unexpected) to adequately assess likely field behavior within this vast environmental unpredictability. Some specific behaviors and some whole software systems need to be tested at Level 5 (Exhausted Capacities) because of their inherent life and safety impact or their core data integrity impact. Touring the seven core testing technologies in Section 4 showed that most existing tools and research efforts function at Level 1 (Ready to Test) or Level 2 (Closely Controlled), with a very few stretching partly into Level 3 (Predictable Variations). Only two software technologies provide means for exercising the functionality related to embedded sensors and equipment – Simulation Environments and Code Manipulation. Unfortunately, these two technologies quickly become very complex to apply, so their reach into the general practitioner testing population is limited. What most practitioners need to perform more effective mobile and IoT software testing are new tools that directly address the gaps not covered by the existing array of tools.

5.1 Requirements for Next Generation Testing Tools

Therefore, to improve automated testing of mobile and IoT software, new approaches are required that meet the following requirements:

- Requirement 1: Directly target functionality dealing with embedded sensors and equipment.
- Requirement 2: Scale easily to vast variations in data readings, data fidelity, data delivery methods, etc.
- Requirement 3: Constrain technological complexity to be within reach of non-specialists.

Because Simulation Environments and Code Manipulation technologies satisfy Requirements 1 and 2, initial steps to produce new software testing tools are likely to be technically complex and to require programming skills to apply them. However, even these tools need to be within reach of non-specialists.
In my dissertation work, attempting to build a tiny tool of this new generation, I encountered the need to be proficient in hardware device driver implementation, Linux kernel programming, cross-language programming and compilation, graphics subsystem implementation at the OS level, analyzing and fixing complex build dependencies, and navigating and deciphering very large codebases. That is a broad and deep set of technical skills, rare among senior developers and rarer amongst software testers. That skill set is more likely to be found in a team of senior developers and build mavens than in one individual. Therefore, when I use “non-specialists” in Requirement 3, I mean: experienced software testers with competent, generalist programming skills but without special technical expertise in any of the subsystems comprising mobile or IoT computing environments and without special technical expertise in the mathematical modelling commonly associated with simulation efforts. Once several new tools and software testing technologies have been developed to sufficient maturity, it will become possible to package some of those features into much more generally-accessible tools, as has already happened with code coverage tools and style checkers.

5.2 One Vision of New Types of Tools

Extremely briefly, my vision for a new testing technology addressing these requirements combines integration testing and compiler knowledge. Integration testing combines multiple individual units of functionality into features of varying sizes and scopes without involving the application as a whole. Inside the program, data flow for these features begins in specific places and ends in other specific places. Interfaces written for mobile and IoT software communicate with these beginning and ending points, and the compiler knows how to connect these interfaces to the objects in the program’s code that do the work – it has to know this information to successfully build a deployable application.
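The data-flow beginning/ending idea described above can be pictured with a toy sketch. Everything here is hypothetical: `FeatureSlice`, its injection and exit points, and the toy "feature" merely stand in for the compiler-connected interfaces the text envisions.

```python
# Illustrative sketch: expose a feature's data-flow beginning and ending
# points so a tester can inject data and read results without driving the
# whole application. All names here are invented for this example.

class FeatureSlice:
    """A 'slice' of functionality with explicit begin/end data-flow points."""
    def __init__(self, steps):
        self.steps = steps          # the actual feature logic under test
        self.exit_point = None      # data-flow ending point: results land here

    def inject(self, data):
        """Data-flow beginning point: tester-specified input enters here."""
        for step in self.steps:
            data = step(data)
        self.exit_point = data

# A toy feature: normalize a raw sensor reading, then classify it.
slice_under_test = FeatureSlice([
    lambda reading: max(0.0, min(1.0, reading / 100.0)),   # normalize to [0, 1]
    lambda level: "high" if level > 0.8 else "normal",     # classify
])

slice_under_test.inject(95)         # tester injects raw data at the entry point
print(slice_under_test.exit_point)  # read the result at the exit point
```

The tester only needs to know the feature's intent and the shape of the data at the two endpoints, not the internal steps, which is the point of the vision sketched above.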
Excerpt from PNSQC Proceedings. Copies may not be made or distributed for commercial use.

I envision testing environments that help testers interact with arbitrary paths through the source code by exposing these data-flow beginning and ending points and allowing testers to specify the data details inserted at the data-flow beginning points, as well as any other critical program state. Tests would then conclude after the actual source code processes the data, and the results of that processing would be read at the data-flow ending points. Such compiler-assisted testing tools would allow testers to perform rich Simulation Environment testing on custom slices of functionality, without requiring fully-featured reality-based environments and without requiring testers to understand all the underlying code – just the intent of the feature and how to manipulate data at the endpoints.

6 Conclusion

Mobile and IoT computing operate within a fundamentally different scale and scope of environmental unpredictability than programs in prior computing eras. Accurately assessing the field performance of software for such devices requires testing at Level 3 (Predictable Variations), Level 4 (Discovers Unexpected), and Level 5 (Exhausted Capacities) – yet most existing tools and research function at Level 1 (Ready to Test) or Level 2 (Closely Controlled), with very few stretching partly into Level 3 (Predictable Variations). This is not good enough. Mobile and IoT computing need better software testing tools, ones that directly handle vast environmental unpredictability and operate easily at great scale. I have one vision for a possible new direction for tool development, but this is a large and complex problem. It needs the skills and experience of many minds brought to bear upon it. What types of tools can you imagine that would be useful to the field?

7 References

Bauersfeld, Sebastian, and Tanja E. J. Vos. 2014.
“User interface level testing with TESTAR; what about more sophisticated action specification and selection?” In SATToSE, 60–78. ———. 2018b, October 11.
ABSTRACT

Memory consistency specifications (MCSs) are a difficult, yet critical, part of a concurrent programming framework. Existing MCS testing tools are not immediately accessible, and thus, they have only been applied to a limited number of platforms. However, in the post-Dennard scaling landscape, there has been an explosion of new architectures and frameworks, especially for GPUs. Studying the shared memory behaviors of different devices (across vendors and architecture generations) is important to ensure conformance and to understand the extent that devices show different behaviors. In this paper, we present GPUHarbor, a widespread GPU MCS testing tool. GPUHarbor has two interfaces: a web interface and an Android app. Using GPUHarbor, we deployed a testing campaign that checks conformance and characterizes weak behaviors. We advertised GPUHarbor on forums and social media, allowing us to collect testing data from 106 devices, spanning seven vendors. In terms of devices tested, this constitutes the largest study on weak memory behaviors by at least 10x, and our conformance tests identified two new bugs on embedded Arm and NVIDIA devices. Analyzing our characterization data yields many insights, including quantifying and comparing weak behavior occurrence rates (e.g., AMD GPUs show 25.3x more weak behaviors on average than Intel). We conclude with a discussion of the impact our results have on software development for these performance-critical devices.

CCS CONCEPTS

- Software and its engineering → Empirical software validation;
- Computing methodologies → Parallel programming languages; Graphics processors.

KEYWORDS

memory consistency, GPUs, mutation testing

1 INTRODUCTION

The end of Dennard Scaling has brought about an explosion of multicore architectures that improve application performance through large-scale parallelism.
Graphics Processing Units (GPUs) exemplify this trend and are now integral components of many systems, from smartphones to large HPC supercomputers. While GPUs were previously primarily used for graphics applications, they now have applications in a variety of areas including machine learning [42] and particle simulations used in drug development [38]. GPUs are even being used for security and safety-critical applications such as encryption [37] and self-driving cars [13], making safety and correctness an increasing concern on these devices. Because GPUs are produced by several vendors (NVIDIA, AMD, Intel, etc.) and evolve rapidly, many different devices are currently deployed. These devices vary both in their performance and in their functional behavior. To account for this, the community has developed portable GPU programming frameworks, such as Vulkan [22] and WebGPU [51], as unified abstractions to target these diverse devices. Memory consistency specifications (MCSs), which define the semantics of shared memory operations, are an important part of these abstractions. While MCSs provide many guarantees, such as atomicity and coherence, they often allow an architecture to implement weak memory behaviors to improve efficiency [34]. For example, x86’s relaxed MCS [44] allows store buffering behaviors, in which a processor may buffer stored values before flushing them to a shared memory location; as a result, another processor may observe the buffered store occurring out-of-order. Because relaxed MCSs can be complex and nuanced, there is a history of platforms (compilers and architectures) containing MCS conformance bugs [1, 4, 25, 30, 31]. That is, the MCS provides a guarantee that the implementation does not honor. Due to the non-determinism of concurrency, MCS bugs may occur extremely rarely, or only when provoked, e.g., by side-channel stress [46].
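The store-buffering behavior described above is exactly what litmus tests probe. The Python sketch below shows only the shape of such a test (two threads, two shared locations, a recorded outcome tuple); CPython's interpreter lock means it will essentially never exhibit the weak outcome `(0, 0)` that relaxed hardware may allow, so it illustrates structure, not real weak behavior.

```python
import threading
from collections import Counter

def sb_litmus_once():
    """One iteration of a store-buffering (SB) litmus test shape."""
    x = [0]; y = [0]         # shared memory locations
    result = {}

    def thread0():
        x[0] = 1             # store x
        result["r0"] = y[0]  # load y

    def thread1():
        y[0] = 1             # store y
        result["r1"] = x[0]  # load x

    t0 = threading.Thread(target=thread0)
    t1 = threading.Thread(target=thread1)
    t0.start(); t1.start()
    t0.join(); t1.join()
    return (result["r0"], result["r1"])

# Run many iterations and tally outcomes, as GPU MCS testing tools do.
# On relaxed hardware, (0, 0) would be the weak store-buffering outcome.
histogram = Counter(sb_litmus_once() for _ in range(1000))
```

Real GPU litmus testing additionally applies tuned memory and thread stress to provoke rare reorderings, which this sketch omits.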
Apart from conformance, a device’s weak behavior profile, i.e., the frequency at which allowed weak behaviors occur and how system stress influences this frequency, is also a useful metric. For example, this can be useful in developing conformance testing strategies [28] and enables developers to reason about tradeoffs between accuracy and performance in approximate computing that judiciously elides synchronization [35, 41, 43]. Unfortunately, previous GPU testing work had limited scope, testing only a small number of devices [25, 28], with the largest study testing eight devices [1]. These approaches did not scale due to the difficulty of portable GPU application development and deployment, e.g., while frameworks like OpenCL [21] are portable in theory, there are many difficulties in practice [47]. Consequently, little is known about the MCS conformance and weak behavior profiles at large. This is especially problematic as portable GPU frameworks depend upon many layers and environments (e.g., architectures, compilers, runtimes, operating systems, etc.); it is difficult to extrapolate insights from a small number of platforms tested in controlled environments to the diverse universe of deployed GPUs.

### 1.1 GPUHarbor

In this paper, we present a large-scale study of GPU MCS testing, which, to the best of our knowledge, tests $10 \times$ more devices than previous studies. Figure 1 summarizes our study, including the number of GPUs that we tested (106), broken down by two frameworks (WebGPU and Vulkan) and seven vendors (Intel, Apple, NVIDIA, AMD, Arm, Qualcomm, and Imagination). This scale is empowered by GPUHarbor, a new cross-platform GPU MCS testing tool suite. GPUHarbor includes two front-ends, a browser web app (using WebGPU) and an Android app (using Vulkan). We advertised our web app on campus forums and social media to obtain a significant number of WebGPU results.
We test far fewer Vulkan devices, as our Android app is not yet widely accessible on the Google Play Store, but in Sec. 7 we discuss how we will enable larger mobile studies on both Android and iOS. GPUHarbor uses litmus tests, small concurrent programs that check for load/store reorderings corresponding to weak memory behaviors. Current GPU MCS testing tools execute litmus tests many times in succession to check conformance and characterize devices [1, 25, 46]. However, these prior approaches have several shortcomings: (1) they are implemented in vendor-specific languages, e.g., CUDA; (2) they require expert users to build, configure, and execute tests on each device, e.g., as is the case for OpenCL; or (3) they embed litmus tests in vendor-specific stress testing environments that do not execute efficiently on other devices. This cumbersome litmus testing workflow made it infeasible to perform a large-scale study. In contrast, GPUHarbor defines litmus tests using a neutral configuration (written in JSON), which it compiles to a portable shading language (WGSL [50] or SPIR-V [20]). The resulting litmus testing application then tunes the testing stress automatically. The net result is a fully automated and easy-to-use tool for GPU MCS testing at large. Table 1 shows how many weak memory litmus test iterations were run and how many weak behaviors were observed in our study. We perform the following two investigations on our data set: (1) we examine the results of MCS conformance tests and find two new bugs in mobile device GPUs from Arm and NVIDIA, and (2) we characterize weak memory behavior profiles, e.g., the rates at which allowed weak behaviors occur and their sensitivity to system stress. Additionally, we provide several analyses of the weak memory profiles. First, we comment on how per-vendor average profiles compare; for example, AMD shows an average of 1.5% weak behaviors, while Intel shows only 0.06%.
We then cluster different GPUs and find that, surprisingly, cross-vendor devices often have similar profiles, while devices from the same vendor sometimes have vastly different profiles. Finally, we discuss how the wide range of different profiles we observed can impact testing strategies and the implementation of synchronization algorithms. ### Contributions In summary, our contributions are: 1. **Tooling:** We introduce GPUHarbor, a new cross-platform GPU MCS testing tool with accessible web and Android interfaces (Sec. 3). 2. **GPU MCS Weak Behavior Characterization:** We conduct a large GPU weak memory characterization and conformance testing study, collecting data from 106 GPUs (Sec. 4). 3. **Conformance Testing and Analysis:** - (a) We discover two unreported bugs in Arm and NVIDIA devices (Sec. 5.1). - (b) We analyze statistical similarities across GPUs and describe the impact on testing strategies and device fingerprinting (Sec. 5.2). - (c) We discuss how weak behavior profiles impact the development and testing of synchronization algorithms on GPUs (Sec. 5.3). 4. **Lessons Learned:** We detail the lessons learned while designing and running this study, providing a guide to other researchers seeking to implement similar large experimental explorations (Sec. 6). All of the data we collected as part of our study, and the tools used to do so, are available as part of our artifact [27]. In addition, GPUHarbor’s web interface is hosted by UC Santa Cruz and can be found at https://gpuharbor.ucsc.edu/webgpu-mem-testing/. ### 2 Background In this section we provide an overview of memory consistency specifications (Sec. 2.1), define the litmus tests we run and how they allow reasoning about relaxed memory models (Sec. 2.2), and introduce GPU programming concepts from the WebGPU and Vulkan GPU frameworks, including descriptions of their MCSs (Sec. 2.3). 
### 2.1 Memory Consistency Specifications

Today, the memory consistency specifications for architectures, e.g., x86 [44], and languages, e.g., C++ [7], are formalized using mathematical logic. This formalism represents shared memory program executions as a set of memory operations, e.g., reads, writes, and read-modify-writes, and relations between these events, e.g., happens-before (hb). Allowed executions are defined by constraints on some of these relations, e.g., hb is required to be acyclic. The strongest MCS is sequential consistency (SC) [26], which states that concurrent program executions must correspond to a total hb order that respects the per-thread program order, while allowing events from multiple threads to be interleaved. In relaxed MCSs, the hb relation is a partial order, allowing various weak behaviors (i.e., executions that are not SC) if shared memory operations on multiple threads are not synchronized. There is a large body of work focused on formalizing MCSs, including a model for Vulkan’s [19]. WebGPU generally follows the Vulkan MCS, with prior work [28] formalizing the portions of its MCS necessary for reasoning about simple litmus tests. However, for this work it is not necessary to understand the full formalization of the WebGPU and Vulkan MCSs, so we describe the necessary subset of the specification briefly and informally. In addition, we follow prior work on MCS testing [25, 28] and consider only *trivially data-race-free* programs, where all operations are atomic, as our intention is not to test the behavior of programs with undefined semantics (caused by data races). **Our Target MCS.** Because Vulkan is one of several backends to WebGPU, the MCS for WebGPU is a subset of the MCS for Vulkan. In order to provide a unified study across both frameworks, we target only the WebGPU MCS, which we then map to its Vulkan counterpart.
The WebGPU MCS provides very little inter-workgroup synchronization due to the diversity of backends it targets, the weakest being Apple’s Metal [6], which provides only relaxed atomic operations. These operations, which come from the C++ memory model [7], compile to plain loads/stores at the architectural level, but at the language level provide few synchronization guarantees between threads. The one inter-workgroup MCS property provided by WebGPU atomics is coherence, which states that memory accesses to a single location must respect sequential consistency; this is sometimes called SC-per-loc [4]. However, memory accesses to disjoint addresses are allowed to be reordered. Mapping these WebGPU atomics to Vulkan is straightforward; all WebGPU atomic accesses are simply mapped to SPIR-V atomic accesses with a relaxed memory order. While our testing campaign considers only relaxed memory accesses, Vulkan allows additional memory orders, specifically acquire and release. While the precise semantics of these memory orders are complex, especially when combined with other relaxed atomics, we note that they are required to implement the synchronization in many common concurrency constructs, such as a mutex. (WebGPU does not provide inter-workgroup acquire and release memory orders, so it is not currently possible to implement a well-specified mutex in WebGPU.) The `lock()` method needs to execute an acquire atomic operation when the mutex is obtained, and the `unlock()` method requires executing a release atomic operation. If a mutex is implemented without these memory orders, it is possible to violate mutual exclusion, as we show in Sec. 5.3.

### 2.2 Litmus Tests

Litmus tests are small concurrent programs that illustrate [45], compare [29, 48], and empirically test [1, 3, 25] MCSs. These tests contain a condition on the final state of local variables and memory values that checks for weak behaviors. For example, the program in Fig.
1a is known as the message passing (MP) litmus test, in which one thread writes to a memory location $x$ followed by a write to $y$, while a second thread reads from $y$ and then $x$. As mentioned earlier, in this work, we assume that all of the memory operations in a litmus test are atomic, which in languages that follow the C11-style MCS [7] ensures that the semantics of shared memory operations are well-defined. Additionally, unless explicitly noted otherwise, we consider these atomic operations to have a relaxed memory order, which allows compilers and hardware to aggressively optimize their execution. The condition underneath the test shows an outcome that only occurs in relaxed executions. In this case, the behavior corresponds to an execution where the read of $y$ returns 1 but the read of $x$ returns 0. While some relaxed MCSs do not allow this behavior, e.g., the x86 MCS [44], many other relaxed MCSs, especially ones for languages like C++ [7], do allow the behavior. As mentioned earlier, our target WebGPU MCS does not provide any guarantees outside of coherence, and thus the two memory accesses per thread (which target disjoint addresses) are allowed to be reordered. In cases where the weak behavior is allowed (both by the MCS and the implementation), the rate at which this behavior is observed on real systems is highly dependent on system stress. Early GPU application development work did not observe any weak behaviors, despite specifications allowing them [12]. However, later work added specialized system stress around the test execution and revealed many cases of surprising weak behaviors [1, 46]. Executing litmus tests on deployed systems can be used for two purposes, which we will illustrate using a litmus test $L$ that can exhibit a weak behavior execution $e$, and an MCS $S$. (1) **Conformance testing**: if $e$ is disallowed in $S$ then we can check implementations of $S$.
That is, if a platform $p$ claims to implement $S$, then we can execute $L$ many times on $p$, checking for $e$. The observation of $e$ would indicate a bug. (2) **Profiling weak behaviors**: if $e$ is allowed by $S$, and a platform $p$ claims to implement $S$, then we can execute $L$ many times on $p$ to understand the extent to which that platform exhibits $e$. In some cases, $p$ might not show $e$ empirically, or $e$ may appear more frequently under a certain configuration of system stress. A collection of this type of data creates a weak memory profile for $p$. Prior work [28] has utilized weak memory profiles in highly tuned conformance testing. In that work, it was shown that allowed MP executions could be used to tune system stress for disallowed behaviors in associated conformance tests. For example, the MP-CO litmus test, shown in Fig. 1b, is similar to MP, except that every memory access targets the same memory address and different values are stored (required to identify a weak behavior). Given that there is only one address used in MP-CO, the weak behavior in this test is disallowed under coherence, and thus in the WebGPU MCS. If certain system stress reveals weak behaviors in the allowed MP litmus test, then, in the case where a platform contains a bug, that stress is likely to reveal the buggy behavior in the MP-CO conformance test. In Sec. 3.1 we show the litmus tests used in our experimental campaign, and in Sec. 5.1 we illustrate the effectiveness of the approach of prior work [28] by describing two new bugs.

### 2.3 GPU Programming

This study targets two cross-platform GPU frameworks, Vulkan and WebGPU. Vulkan is a modern graphics and compute API that can run on many Linux, Android, and Windows devices, and can target Apple devices through the MoltenVK [23] portability layer.
WebGPU is designed to run in browser environments and is compiled to different backends depending on the operating system of the device (Direct3D [33] on Windows, Vulkan on Linux/Android, and Metal [6] on Apple devices). Both Vulkan and WebGPU define their own programming languages, called SPIR-V and WGSL respectively. Programs written in these languages are called shaders and run on the GPU, while the APIs used to allocate memory on the GPU and dispatch shaders are written in the language of the host device, commonly C++ for Vulkan and JavaScript for WebGPU. In this work, we discuss the complexities of writing tools that must be implemented in different languages and how future development (Sec. 7) could ease the difficulty of cross-platform GPU MCS testing. **GPU Execution Model.** GPUs run thousands of concurrent threads (invocations in Vulkan and WebGPU) organized hierarchically and executed in a single-instruction, multiple-thread (SIMT) format. To support this execution model, in WGSL and SPIR-V threads are partitioned into discrete workgroups, with built-in identifiers used to query a thread’s workgroup id. Workgroups are limited in size (e.g. 1024 in CUDA, with limits varying depending on the device in WGSL/SPIR-V) and have access to an efficient shared memory region. A group of threads organized into workgroups and running on the device is called a grid, with the number of threads per workgroup and the number of workgroups specified at dispatch time. All threads in the same dispatch have access to a global memory region. While our target MCS was discussed in the previous section, we note that GPU atomic operations can be annotated with a memory scope. Two common scopes in Vulkan and WebGPU are workgroup, which specifies that synchronization occurs only between threads in the same workgroup, and device, which specifies that synchronization occurs across all threads executing on the device. 
Threads within workgroups generally have access to efficient primitive barrier operations, e.g., workgroupBarrier in WebGPU. However, highly optimized implementations of important parallel routines (e.g., inter-workgroup prefix scans [32]) rely on fine-grained inter-workgroup communication. Thus, like prior work [25, 28], we see a more imminent need for testing MCS properties at the inter-workgroup level, which is the sole scope of this work. Similarly, GPU programs can place data in several different memory types, e.g., workgroup-shared memory or device-wide memory. Given that we consider only inter-workgroup interactions, we only consider device-wide memory.

### 3 System Overview

Building on approaches in prior work [28], we discuss our testing campaign (Sec. 3.1) and the development of our MCS testing tools that are easily accessible on a wide range of devices, summarized in Fig. 2. We overview each stage of the tooling, starting with litmus test generation (Sec. 3.2), moving on to the design of GPUHarbor’s web interface and Android app (Sec. 3.3). We end the section by describing our data collection process (Sec. 3.4).

### 3.1 Litmus Test Selection

The tests we utilize in our study build on the MCS mutation testing strategy used in [28]. We use 32 mutants, of which 24 are litmus tests with weak behaviors allowed by the WebGPU MCS. The mutants are used to find effective system stress under which to run the conformance tests. Our results analysis focuses on characterizing the rates of weak behaviors of six of the mutants, one of which is MP (Fig. 1a), with the other five shown in Fig. 3. These tests enumerate all the combinations of four instructions on two threads that can lead to weak behaviors. Thus, they capture testing for all pair-wise memory reorderings. For example, the SB test checks for store-load reorderings, while the LB test checks for load-store reorderings.
Additionally, these tests capture synchronization patterns used in common concurrency algorithms like a compare-and-swap spinlock. Because of this, prior work has also focused on these tests and has shown their utility in finding bugs in both applications and MCS implementations [25, 46]. Once the mutants are run, we use the weak behavior profile of a device to determine an effective system stress configuration under which to run conformance tests. We utilize the 20 conformance tests from [28]. As a concrete illustration using one mutant and one conformance test, we would run the MP test under many different system stress configurations to build a weak behavior profile. We then use the configuration most effective at revealing MP weak behaviors to run a closely related conformance test, e.g., MP-CO (Fig. 1b). This approach was shown to be effective at finding bugs in prior work [28], and we further show its effectiveness by discovering two new bugs: a violation of MP-CO on Arm devices and a violation of MP-CO on an NVIDIA device (see Sec. 5.1).

### 3.2 Litmus Test Generation

We now discuss the tooling that generates and runs our testing and characterization campaign. Litmus test behaviors are nondeterministic and sensitive to system stress. Because of this, the shaders that run the litmus tests contain not only the actual litmus test instructions, like those in Fig. 3, but also take a number of parameters and provide functions that are used to construct system stress. To provide a standardized interface for defining litmus tests in different GPU languages, we built a tool, Litmus Generator (LitGen, (2) in Fig. 2), which is similar to previous litmus testing tools [3] but is specifically targeted at creating GPU programs with system stress, as was shown to be necessary for testing GPU MCSs [1, 25, 28]. LitGen takes litmus tests written in an abstract format, currently JSON, that specifies the actions of the test (e.g.
loads and stores) and the possible behaviors of the test, with a special designation given to weak behaviors. The tests used in this work were all manually specified, as they are relatively small, but LitGen could be integrated with other tools that use formal models to generate litmus tests, e.g., [2, 48], which would provide more automation and account for more complicated tests and MCSs. LitGen outputs a test shader, which runs the test alongside system stress developed in prior work [25, 28], and a result shader, which aggregates the observed behaviors of the test. The result shader is generated separately from the test shader for several reasons:

1. Some tests, like 2+2W (Fig. 3e), examine memory locations for weak behaviors after all threads have finished executing the test. To avoid relying on synchronization features (some of which we are trying to test), we instead pass the test memory buffer into a new result aggregation shader, which executes after the test shader.

2. LitGen implements parallel testing, described in [28], which runs thousands of instances of each litmus test concurrently. Thus, it is natural to leverage the inherent parallelism of the GPU to also aggregate the many results, which otherwise may be time-consuming to do on the CPU, especially since it requires copying memory from the GPU to the CPU.

Currently, two backends exist for LitGen. The tool outputs WGSL shaders directly, as WGSL is a text-based language. SPIR-V, on the other hand, is a low-level representation similar to LLVM, increasing its flexibility but making code generation more complex.
For Vulkan backends, LitGen therefore utilizes Clspv [15].

[Figure 3 panels: (a) Load Buffer (LB), weak behavior r0 == 1 && r1 == 1; (d) Read (R), weak behavior r0 == 0 && y == 2; (e) 2+2 Write (2+2W), weak behavior x == 2 && y == 2. Each test initializes x = 0; y = 0.]

Figure 3: These litmus tests, along with MP from Fig. 1a, represent six classic weak behaviors allowed by relaxed MCSs. S and L signify a relaxed atomic store and load, respectively.

### 3.3 GPUHarbor Design

Previous GPU MCS studies have been limited in reach due to the difficulty of deploying cross-platform GPU applications [47]. One of the reasons for this is the fractured landscape of GPU development.
NVIDIA’s popular CUDA framework [36] is used for many data science applications but is only supported on NVIDIA GPUs. OpenCL [21] was introduced in 2009 by Apple as a cross-platform standard; however, today Apple no longer supports OpenCL and instead requires developers to use its proprietary Metal API [6]. Another issue with previous GPU MCS testing approaches has been a reliance on expert users to run testing campaigns. For example, using OpenCL would require users to install the required drivers, build the application under a specific environment, etc. To collect data from the diversity of devices necessary to gain confidence in cross-platform GPU frameworks, tools must minimize the friction of setting up and running tests. The tools we introduce here are easily distributed applications with a user-friendly interface on top of new GPU frameworks, so that even non-technical users can collect data on their devices and submit results for analysis. To this end, we introduce GPUHarbor: a GPU MCS testing tool with two widely supported and accessible frontends, a web interface and an Android app (shown in Fig. 2). GPUHarbor’s web interface and Android app have a common design with two functions: exploring and tuning/conforming. “Explore” pages run specific litmus tests, display histograms of results, and provide the ability to adjust various parameters that control system stress. When tuning and conforming, a set of tests is chosen to run with multiple random system stress configurations, searching for configurations that maximize the rate of weak behaviors and uncover bugs in MCS implementations. While this study includes the largest collection of data on mobile GPU MCS behaviors, in Sec. 7 we discuss future work that could increase the reach of mobile GPU MCS testing even further.
#### Exploring

Figure 4 shows a screenshot of GPUHarbor’s web interface explore page for the MP litmus test after the test has been run with relatively high system stress on a MacBook Pro with an integrated Intel Iris GPU. The top of the page includes a description of the test and pseudocode showing the test instructions. The right-hand side includes an editable list of the parameters that define system stress, along with several presets. When the test is running, the histogram updates in real time with the number of times each behavior is observed. The progress bar gives an estimate of how much time is left, based on the speed of previous iterations. The green bars correspond to sequential behaviors, where one thread runs entirely before the other. The blue bar corresponds to interleaved behaviors, where actions from each thread are interleaved (e.g., leading to the behavior r0 == 0 && r1 == 1 in the MP litmus test). The red bar corresponds to weak behaviors; in this run, three MP weak behaviors were observed out of over 13 million test instances, so the histogram shows behaviors on a log scale, as weak behaviors are relatively rare.

#### Tuning and Conforming

Both the web interface and the Android app can be used to tune system stress, as in [28]. When tuning, a set of tests can be selected, with presets available for weak memory tests (e.g., those in Fig. 3) and conformance tests, e.g., to test coherence. Testing options like the number of configurations, the maximum number of workgroups, and other parameter overrides can be modified to run different experiments and check specific tests without redeploying any code. To collect data from volunteer users across a diverse set of devices, we strive to minimize the options users have to configure. This reduces the chances of errors and provides us with a standardized dataset to analyze.
The web interface’s tuning page, therefore, includes a tab that exposes no configuration options and instead shows only a few buttons: one button starts a combined tuning/conformance run with default parameters, and another pulls up a submission form, which submits the results along with some (optional) contact information. Our results are all anonymized; contact details were only collected if users wanted to be informed about the outcome of the study. Before submitting, users agreed that their anonymized results could be aggregated, reported on, and released as part of this study. The data is then analyzed using Python scripts. The Android app is not yet available on the app store, nor is it integrated with the SQLite backend, so results are manually copied off of the device for analysis. In Sec. 7 we discuss how we can reduce the friction of submitting mobile app results and thus increase the reach of future studies. Nevertheless, our study of eight devices is the largest testing campaign of mobile GPU MCS behaviors of which we are aware. While system stress configurations are generated randomly, we would like to ensure that the configurations run on different devices are the same for data analysis purposes. That is, if different GPUs are tested with the same stress configurations, we can compare how the different devices behaved under the same stress. We ensure this by integrating a seedable Park-Miller random number generator into both the web interface and the Android app and using the same seed when running all of our tuning experiments. By default, browsers expose only limited information about the user’s GPU without turning on vendor-specific development flags, due to privacy and security concerns around fingerprinting. In order to have as much information as possible about our data, we included instructions asking users to temporarily enable flags describing the GPU architecture, such as “intel gen-9” or “nvidia ampere”.
All Apple devices reported an architecture of “common-3”, making it impossible to immediately distinguish M1s from M2s. However, we show in Sec. 5.2 that our data can be used to infer device information, hindering the ability of browsers to hide the specifics of a user’s GPU.

### 4 Initial Results: Weak Behavior Characterization

To collect data from as many sources as possible, we disseminated the link to GPUHarbor’s web interface to the general public, utilizing campus forums and social media, and ran the Android app on eight devices that we could physically access. As shown in Tab. 1, we collected data from millions of tests; each test used a randomly generated system stress configuration (we used 50 configurations on the web interface and 150 on the Android app). In each configuration, tests were run millions of times based on a randomly generated number of workgroups and threads per workgroup. To ensure data integrity, we implemented a checksum algorithm that verified we saw the expected number of overall behaviors based on the system stress configuration. The testing duration was also recorded; however, we ran into one issue here. Some computers went to sleep in the middle of the tests, suspending the browser’s process and leading to extremely long recorded test times. To overcome this, we recorded testing time on a per-test/configuration basis; we then filtered the results so as to not include any test/configuration durations over one minute. We note that each individual test runs quickly (e.g., in less than 5 seconds); thus, runs that were over one minute most likely occurred when the computer went to sleep. To approximate the length of a test that was suspended, we used a neighboring test’s time. One consideration for collecting data from the wider public is that we cannot afford to run tests for hours at a time. Previous work targeted only a few devices, running tests on a single device for a minimum of 36 hours in one case and 2 hours in another.
However, asking volunteer users to leave their browsers and computers open for that long is impractical and would certainly decrease the number of submissions. Therefore, we heuristically chose the number of test environments and iterations per environment, aiming for the tests to finish in 10-20 minutes. Figure 5 shows the distribution of testing time on our web interface, broken down by vendor. The results show that NVIDIA devices were the fastest on average, mostly running all tests in under 15 minutes. On the other hand, Intel devices ran slower, with two older Intel GPUs taking over an hour and a half to complete. In the rest of this section, we analyze our WebGPU and Vulkan data to characterize the rates at which weak behaviors occur on devices from different vendors. These initial results motivate three research questions, which are explored in depth in Sec. 5:

1. Do MCS bugs exist in the wild, especially in GPUs which are relatively untested (Sec. 5.1)?

2. Can our characterization data be used to identify similarities between GPUs (Sec. 5.2)? If so, then our data can be used to develop new testing strategies or to expose potential new browser fingerprinting vulnerabilities.

3. How can a weak behavior characterization study inform programming guides for implementing synchronization constructs, e.g., mutexes (Sec. 5.3)?

### 4.1 Weak Behaviors in WebGPU

Figure 6 shows the average rates of observed weak behaviors for the six litmus tests (MP from Fig. 1a plus the five in Fig. 3) in the test environment that maximizes the rate on each device, broken down by test and vendor. As described in Fig. 1, we have data from at least 15 devices from each vendor. The overall testing time across all 98 devices was 31.1 hours, an average of 19 minutes per device. Devices from all vendors showed weak behaviors on each litmus test. In all but two cases, observing weak behaviors was all or nothing: if a device revealed weak behaviors on one litmus test, it revealed weak behaviors on all of them.
In contrast, on a device implementing x86’s TSO MCS, we would expect to see only store buffering behaviors. However, unlike x86 CPUs, GPU devices do not document low-level details such as the hardware-level MCS, so it was not clear what types of weak behaviors we would observe. These results show that many GPUs implement very relaxed memory models, in contrast to stronger CPU architectures like x86 TSO. Intel devices tended to have the lowest rate of weak behaviors, with just over half of them (15/26) revealing weak behaviors on each test. The median rate of weak behaviors on Intel devices was even lower than their average, around 0.02% for each test. No Intel device showed a rate of weak behaviors above 1% on any test. NVIDIA devices revealed weak behaviors at a relatively low rate. Our data include results from NVIDIA’s Kepler (2012), Maxwell (2014), Pascal (2016), Turing (2018), and Ampere (2020) architectures, with a majority being the more recent Ampere. Older devices generally showed fewer weak behaviors, with the minimum on each of the six tests occurring on Kepler and Maxwell devices. However, one outlier is that the maximum rate of SB behaviors (7.3%) was seen on a Kepler device. Interestingly, that device was also the only device not to reveal any weak behaviors on S, LB, and 2+2W. The only other device not to reveal weak behaviors on a test was a Quadro K620 with a Maxwell architecture, on MP, R, and SB. Apple devices were consistently weak, revealing weak behaviors on every device and test, generally at a higher rate on all tests than NVIDIA devices but with less variation than AMD devices. Apple GPUs have only recently been built into non-mobile devices, so these results represent the first comprehensive evaluation of weak behaviors on Apple GPUs. We do not have the specific name of every Apple device, but we were able to collect enough information to show we had results from Apple M1 (basic, Pro, Max) and Apple M2 (basic, Pro) devices.
AMD devices were also very weak, with 100% of devices showing weak behaviors on every test. The clear highest average rate occurs on the SB litmus test on AMD GPUs. Most of the AMD devices show a high rate of weak behaviors on SB, approaching 10% and higher, but devices with AMD's Graphics Core Next 5 micro-architecture all showed rates under 1%. This means that even from a single vendor, the behaviors of different architectures can vary widely, and past results from one vendor cannot be counted on to predict future behaviors.

### 4.2 Weak Behaviors in Vulkan

The data in Tab. 2 shows the percentage of weak behaviors in the test environment that maximizes the rate at which they occur for our Android devices. In contrast to our WebGPU results, in the mobile setting weak behaviors were observed in every test on only one device, the NVIDIA Tegra X1, and the rates on this device were very low, beneath 0.1%. The test on which weak behaviors were hardest to observe was R, which checks whether a store is reordered with a following load on one thread. We did not observe any weak behaviors on the Imagination GPU; because testing is fundamentally incomplete, this could mean that the device implements a strong MCS, or that our testing approach was not effective. Interestingly, Arm only showed weak behaviors in the MP test.

We observe that, in general, the rates of weak behaviors increase as devices become more powerful. This is especially apparent from the four Qualcomm devices we test, as the rate of weak behaviors increases from 0% on the Adreno 610 (which has 96 shading units, analogous to NVIDIA's CUDA cores) up to a maximum of 14.37% in SB on the Adreno 660 (with 512 shading units). One intuitive explanation for this might be that smaller GPUs lack the ability to schedule as many threads at once, naturally reducing the rates of weak behaviors despite architectures that might allow them.
We see a similar trend on the Arm GPUs, where the smaller Mali-G71 (32 shading units) showed a lower rate of weak behaviors than the larger Mali-G78 (384 shading units).

### 5 INSIGHTS AND IMPACTS

We now set out to answer the three questions posed in Sec. 4 using our data and characterization of weak behavior rates.

#### 5.1 MCS Bugs

Our conformance testing campaigns discovered bugs on several vendors' devices when running under the Vulkan and WebGPU frameworks.

(1) **Arm:** We observed coherency violations of the MP-CO litmus test when using the Vulkan framework on two Arm GPUs, a Mali-G71 and a Mali-G78. These bugs were reported to and confirmed by Arm, leading to a compiler fix that inserts a missing memory fence. Arm has also added regression tests based on the pattern of the violation we reported.

(2) **NVIDIA:** We also observed violations of the MP-CO test when run using the Vulkan framework on an NVIDIA Tegra X1. Additionally, our WebGPU conformance test results revealed violations of a different coherence test, RR, on an NVIDIA Quadro P620 running on a Linux desktop (therefore using Vulkan as the native framework). The combined report of the bug on the Tegra X1 and Quadro P620 helped NVIDIA find and fix a bug in their Vulkan compiler. NVIDIA also noted that this bug affected the Vulkan compiler in all pre-Volta architecture GPUs.

(3) **Apple:** Our WebGPU conformance tests revealed coherence violations in RR on eight other devices, all running on Apple machines. Five of these devices were Intel GPUs, two were NVIDIA GPUs, and one was an AMD GPU. MCMutants observed the same issue on an Intel integrated device on a MacBook and reported it to Apple [28]; the bug has not been confirmed or fixed. It is likely that the Intel bugs are instances of the same issue, but our results are the first time the bug has been observed on non-Intel GPUs.

We found all of the bugs by running conformance litmus tests under tuned system stress.
We choose the system stress using the methodology described in prior work [28], i.e., by selecting a system stress configuration effective at revealing weak behaviors in an associated weak memory litmus test. For the MP-CO bugs on the Arm and NVIDIA devices, we perform a correlation analysis to empirically validate the methodology. We run both the MP and MP-CO litmus tests in 150 randomly generated system stress configurations on each of the Android devices showing the bug, recording the rate of weak behaviors in MP and the rate of buggy (i.e., non-coherent) behaviors in MP-CO. Our results show that the Pearson Correlation Coefficient (PCC) between the rate of weak and buggy behaviors is 0.732 on the Arm Mali-G71, 0.759 on the Arm Mali-G78, and 0.832 on the NVIDIA Tegra X1. Since these behaviors are recorded from 150 samples (i.e., system stress configurations), we have 148 degrees of freedom, and running a Student's t-test leads to a p-value less than $10^{-5}$ on each device. This shows that the correlation between weak behaviors and bugs is highly unlikely to be due to random chance, further validating that configurations tuned using weak behaviors are effective at revealing bugs in conformance tests.

### 5.2 GPU Similarity

All of the data was collected by running the tests with pseudo-randomly generated system stress configurations, but as mentioned in Sec. 3.4 the generator is seeded with a known value. Thus, we can compare how different GPUs behave under the same stress parameters. To do this, each testing run is represented as a vector of the non-sequential behaviors (i.e., excluding outcomes where one thread runs entirely before another) of every test in each configuration. Ignoring the sequential behaviors gives the data a degree of freedom, which is necessary for calculating a valid similarity measure. For our similarity metric, we choose cosine similarity, which measures the cosine of the angle between two vectors and ranges from -1 to 1.
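Both statistics used in this section can be sketched in a few lines. This is a hedged illustration, not the paper's tooling, and the function names are ours: the Pearson correlation and its t statistic underpin the bug-stress validation, and cosine similarity underpins the device comparison.

```cpp
#include <cmath>
#include <vector>

// Pearson correlation coefficient (PCC) between two equal-length samples,
// e.g. per-configuration weak-behavior rates (MP) vs. bug rates (MP-CO).
double pearson(const std::vector<double>& xs, const std::vector<double>& ys) {
    size_t n = xs.size();
    double mx = 0, my = 0;
    for (size_t i = 0; i < n; ++i) { mx += xs[i]; my += ys[i]; }
    mx /= n; my /= n;
    double cov = 0, sx = 0, sy = 0;
    for (size_t i = 0; i < n; ++i) {
        cov += (xs[i] - mx) * (ys[i] - my);
        sx  += (xs[i] - mx) * (xs[i] - mx);
        sy  += (ys[i] - my) * (ys[i] - my);
    }
    return cov / (std::sqrt(sx) * std::sqrt(sy));
}

// t statistic for testing r != 0 with n - 2 degrees of freedom; with
// n = 150 and r = 0.732 this is roughly 13, consistent with p < 1e-5.
double t_statistic(double r, size_t n) {
    return r * std::sqrt((n - 2) / (1.0 - r * r));
}

// Cosine similarity between two behavior vectors. Scaling a vector by a
// constant leaves the similarity unchanged, which is why devices with
// proportionally similar (but not equal) rates compare as close.
double cosine_similarity(const std::vector<double>& u,
                         const std::vector<double>& v) {
    double dot = 0, nu = 0, nv = 0;
    for (size_t i = 0; i < u.size(); ++i) {
        dot += u[i] * v[i];
        nu  += u[i] * u[i];
        nv  += v[i] * v[i];
    }
    return dot / (std::sqrt(nu) * std::sqrt(nv));
}
```

For example, `cosine_similarity` returns 1.0 for a vector and a scaled copy of itself, whereas Euclidean distance between the same pair can be arbitrarily large; this relativity is the property the similarity analysis relies on.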
We chose cosine similarity because it is a relative metric rather than an absolute one like Euclidean distance: devices that show different absolute rates of behaviors, but in similar proportions across tests, are classified as more closely related.

**Device Identification.** Table 3 shows a summary of the similarity between devices in our study. All similarities are positive, with the minimum being 0.477 between an Intel and an AMD device. This is not surprising, since effective system stress is likely to reveal weak behaviors on many devices. However, the average and median similarities between devices from each vendor are higher than the overall average and median, showing that in general devices from the same vendor tend to have more similar MCS behaviors. For Apple and NVIDIA, we confirmed that the maximum similarity occurs between identical GPUs: two Apple M1 Max devices and two NVIDIA GeForce RTX 3080s. For AMD, we observe a maximum similarity of 0.989 between two devices, one of which is a Radeon Pro 5500M (A) while the other device (B) did not report a model and instead only indicated that it was from the same architectural generation as A. However, we observed a high similarity (0.985) between A and another Radeon Pro 5500M (C), as well as a similarity of 0.984 between B and C, so it seems likely that A, B, and C are all the same model of device. We do a similar analysis with Intel to determine that an unknown device is most likely an Intel Iris Xe Graphics. While we are most interested in using this data to help choose conformance test strategies, as shown next, we also note that GPU MCS behavior data like this exposes a fingerprinting vulnerability, despite the specification trying to hide specific device information for security reasons.

**Clustering Based Testing Strategies.** K-means clustering attempts to minimize the distortion, i.e., the sum of squared distances between each vector and its cluster centroid.
Applying k-means clustering to GPU MCS behavior has implications for testing strategies.

<table>
<thead>
<tr>
<th>Vendor</th>
<th>Avg</th>
<th>Median</th>
<th>Min</th>
<th>Max</th>
</tr>
</thead>
<tbody>
<tr>
<td>Intel</td>
<td>0.870</td>
<td>0.891</td>
<td>0.683</td>
<td>0.985</td>
</tr>
<tr>
<td>Apple</td>
<td>0.903</td>
<td>0.913</td>
<td>0.699</td>
<td>0.993</td>
</tr>
<tr>
<td>NVIDIA</td>
<td>0.903</td>
<td>0.931</td>
<td>0.670</td>
<td>0.996</td>
</tr>
<tr>
<td>AMD</td>
<td>0.904</td>
<td>0.927</td>
<td>0.661</td>
<td>0.989</td>
</tr>
<tr>
<td>All</td>
<td>0.840</td>
<td>0.862</td>
<td>0.477</td>
<td>0.996</td>
</tr>
</tbody>
</table>

Table 3: Each row shows the cosine similarity statistics between all pairs of devices from that vendor. The last row shows the similarity statistics across all pairs of devices.

Table 4: Device clustering shows that choosing one device from each vendor is not an optimal way to test applications that utilize the MCS.

Table 5: Analysis of three locking algorithms, Test-and-Set (TAS), Test and Test-and-Set (TTAS), and Compare-and-Swap (CAS), showing in how many test runs (out of 1000) we observed failures of unfenced (UF) and fenced (F) lock implementations to protect a critical section. The total time to run all tests on each device is also recorded.
<table>
<thead>
<tr>
<th rowspan="2">Device</th>
<th rowspan="2">Time (min)</th>
<th colspan="2">TAS</th>
<th colspan="2">TTAS</th>
<th colspan="2">CAS</th>
</tr>
<tr>
<th>UF</th>
<th>F</th>
<th>UF</th>
<th>F</th>
<th>UF</th>
<th>F</th>
</tr>
</thead>
<tbody>
<tr>
<td>Adreno 610</td>
<td>3.5</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>Mali-G78</td>
<td>67.9</td>
<td>18</td>
<td>0</td>
<td>11</td>
<td>0</td>
<td>7</td>
<td>0</td>
</tr>
</tbody>
</table>

When developing cross-platform GPU applications that rely on shared memory operations, testing these applications on a number of devices can increase confidence in the correctness of the implementation. A naive strategy might be to choose one device from each major vendor, but our results show that this is not necessarily optimal. When selecting which devices to test a shared memory GPU application on, a better strategy is to choose one device from each cluster; we chose the number of clusters by observing where the rate of decrease in distortion leveled off, which occurred at 6 clusters. The clustering data shows that devices from the same vendor are generally placed into the same cluster, but there are outliers in each case. The only NVIDIA Kepler device in our study was dissimilar enough from other devices that it was placed in its own cluster. Kepler is also the oldest NVIDIA architecture in our study, showing that special testing attention might be needed when supporting older devices in cross-platform GPU frameworks.

### 5.3 Implementing Synchronization Algorithms

We now discuss a use case of how the diversity of weak memory profiles across different GPUs can impact software development. Locking algorithms are implemented using atomic operations to synchronize access to critical sections. Implementations of locks depend on careful placement of memory fences to prevent compilers and hardware from reordering memory accesses, which can cause critical section failures.
We implemented three common spin-locks: test-and-set (TAS), test-and-test-and-set (TTAS), and compare-and-swap (CAS). Each of these locks specifically needs to disallow MP behaviors using acquire/release memory fences. However, our results in Sec. 4 show that on some mobile devices MP weak behaviors never occur, meaning that if the locks are tested on these devices, they may run correctly despite being incorrectly implemented (according to the specification). To investigate this, we tested our three locks on two Android devices, an Arm Mali-G78 and a Qualcomm Adreno 610. The locks were implemented both with and without appropriate acquire/release memory fences. In these tests, threads from different workgroups acquire the lock 10k times and increment a non-atomic memory location in the critical section. We ran this test for 1k iterations and recorded the number of critical section violations we observed for each device and each lock.

On the Arm Mali-G78, a larger GPU which exhibits a relatively high rate of MP behaviors, we observed critical section failures in unfenced versions of all three locks; in every failure case except one the value was 189,999 instead of 190,000, meaning that just one of the increments was not reflected. In the remaining failure case, the value was 189,998. On the Qualcomm Adreno 610, which exhibited no MP behaviors in our study, we saw no failures. Both devices exhibited no failures when locks were run with correct fences. Therefore, when writing applications that require synchronization, care must be taken to ensure the application is tested on devices where incorrect implementations will lead to failures, highlighting the importance of collecting and characterizing MCS behavior data.

## 6 LESSONS LEARNED

In this section we discuss important lessons learned while developing and running our study.

**Ease of Use.**
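A correctly fenced TAS lock can be sketched as follows. This is a CPU-side sketch with C++ `std::atomic`, not the paper's GPU shaders, and the class and function names are ours; the acquire ordering on the exchange and the release ordering on the store play the role of the fences whose omission produced the critical-section failures above.

```cpp
#include <atomic>
#include <thread>
#include <vector>

// Test-and-set (TAS) spin lock with acquire/release fences.
class TasLock {
    std::atomic<bool> flag{false};
public:
    void lock() {
        // test-and-set: spin until we atomically flip the flag false -> true
        while (flag.exchange(true, std::memory_order_acquire)) { /* spin */ }
    }
    void unlock() { flag.store(false, std::memory_order_release); }
};

// Each of `nthreads` threads increments a non-atomic counter `iters` times
// inside the critical section; a correct lock yields nthreads * iters.
int run_threads(int nthreads, int iters) {
    TasLock lock;
    int counter = 0;  // non-atomic, protected only by the lock
    std::vector<std::thread> threads;
    for (int t = 0; t < nthreads; ++t)
        threads.emplace_back([&] {
            for (int i = 0; i < iters; ++i) {
                lock.lock();
                ++counter;  // critical section
                lock.unlock();
            }
        });
    for (auto& th : threads) th.join();
    return counter;
}
```

Replacing the acquire/release orderings with `std::memory_order_relaxed` would correspond to the unfenced (UF) variants in Table 5: the lock handoff no longer orders the critical-section accesses, so increments can be lost on sufficiently weak hardware.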
Technical studies of low-level details like memory consistency specifications [3, 25, 46] have been run by expert practitioners and involve installing special software (e.g. OpenCL/CUDA drivers) and running experiments from command line interfaces. However, experiments that solicit non-technical users require accessible and frictionless interfaces in order to collect many results. For example, we initially had users download their results and email them to us directly, but found that many users would not take this seemingly small step. Thus, we implemented a way to submit results by simply clicking a button. This required substantial engineering effort, both to set up a client/server infrastructure and to distribute the tools in a non-technical way (e.g. through web browsers/app stores). Once implemented, this workflow also had the benefit of making our experiments standardized; instead of relying on users to configure their system and choose the right options, all of this was baked in so that users only had to click a few buttons to run and submit results.

**Testing Time.** Previous studies ran tests for hours or days, but it is unrealistic for volunteer users to run experiments for that long on their devices. Therefore, we explored the trade-off space between experiment time and behavior coverage. Through trial and error, we determined parameters that allowed us to collect high-quality, standardized data in a short time frame, utilizing testing techniques from prior work that increased testing speed and provided statistical measures of reproducibility [28].

**Enabling New Research Questions.** Important research questions on memory consistency, including the three from Sec. 4, require performing a large-scale study. For example, previous studies [25] have attempted to create portable testing strategies, but could only provide limited guidance on choosing representative sets of devices to test on, due to the small number of devices in their evaluation.
On the other hand, our data shows that GPUs from different vendors can behave similarly under stress, and thus portability may not be vendor-specific. Therefore, increasing the scale of evaluation through faster and more accessible testing should be an important factor when developing new testing strategies for a diverse (and ever growing) set of devices.

**Extensibility.** When we first designed our LitGen tool, it only generated SPIR-V shaders. However, as we started focusing on testing WebGPU's MCS, LitGen's neutral configuration language (JSON) allowed us to easily write a backend generator for WGSL shaders. Ensuring our tools are extensible means that they might also be useful for researchers testing other areas of GPU specifications, e.g., floating point operation accuracy. In the same vein, our initial app only targets Android devices running Vulkan, but as we seek to expand the scope of our testing, we plan on developing an app that will work on both Android and iOS devices.

## 7 FUTURE WORK

This work has required significant engineering effort to enable the testing of many different GPUs. However, given the difficulty of cross-platform GPU programming, we were still unable to test mobile Apple GPUs, which appear in some of the most widely used mobile devices. Additionally, our web interface and Android app contain distinct user interfaces and GPU setup code, causing duplicated effort and maintenance. In this section, we outline a path forward, with Flutter as a fitting match for these goals.

Flutter [17] is an open-source software development kit developed by Google that provides deployment options for desktop platforms (such as Windows, macOS, and Linux), mobile platforms (Android, iOS), and even web deployment from a single frontend codebase. With a unified codebase for the MCS testing front end, development work can be focused on designing backend implementations specific to those platforms.
Underlying Flutter is Dart [16], a language also developed by Google for cross-platform app development. For each supported platform, Flutter provides an interface to backend code native to the specific platform. On the Android side, GPU access is provided through Dart's foreign function interface (FFI) library, which loads a dynamically linked C library compiled against the version of Vulkan provided by Android's Native Development Kit (NDK) [14]. The Dart FFI library can be used similarly on all supported platforms except for the web, for which GPU access will involve calls to JavaScript code utilizing WebGPU.

Vulkan, while well-supported on Windows, Linux, and Android devices, is not officially supported on macOS and iOS. For these platforms, there are two options. For a more native-friendly option, the Vulkan backend code could instead be rewritten to depend on Apple's Metal [6] API, with SPIR-V shaders transpiled to the Metal Shading Language (MSL) using SPIRV-Cross [24], a tool developed by the Khronos Group. However, to reduce development time and duplicated code across multiple platforms, the Vulkan backend code can be passed through MoltenVK [23], a Khronos Group implementation of a large subset of Vulkan 1.2 on top of Metal. This provides a portability layer with which to run Vulkan applications on iOS and macOS platforms.

We also plan on integrating our new tools with the current server backend, allowing us to collect data from devices we do not have physical access to using a simple API interface. With a single source for interface design, GPU setup, and data collection, it is expected that future work will be able to deploy MCS testing at a wider scale and collect results from GPU hardware previously inaccessible in related work.
## 8 RELATED WORK

**Testing MCSs.** Work on testing MCSs dates back to tools like ARCHTEST [49] and TSOTool [18], which each generated test programs containing sequences of loads and stores and then looked for violations of sequential consistency. With the introduction of formal MCSs, researchers developed tools like Litmus [3], which runs litmus tests generated from formal models directly on ISAs (namely x86, Power, and Arm) and includes stress parameters that make weak behaviors more likely. These litmus testing techniques have since been extended to GPUs [1, 25]. Weak behaviors on GPUs are notoriously difficult to reveal, leading to work that statistically analyzed tuning techniques and reproducibility of results when running litmus tests on GPUs [25]. To better evaluate the efficacy of test environments and provide confidence in MCS implementations, [28] introduced a methodology based on black-box mutation testing [8], finding bugs in several WebGPU MCS implementations. Previous studies have been limited in the number of devices they were able to test. In contrast, this study introduces tooling that allows us to conduct the largest ever GPU MCS testing campaign, running tests across 2 frameworks, 7 vendors, and 106 devices.

**Testing at Scale.** Other studies have tested large numbers of devices, searching for bugs in compilers and hardware. In [11], 17 GPU and driver combinations were tested for compiler bugs. Our approach, distributing the GPU MCS testing experiment using a web interface, is a form of volunteer computing, where the general public volunteers their computing resources for research studies. Volunteer computing has been used for many compute-intensive tasks, including searching for extraterrestrial life [5], training neural networks [10], sequencing genomes [40], and climate modeling [9].

## 9 CONCLUSION

We introduce GPUHarbor, a tool suite with a web interface and Android app for accessible cross-platform GPU MCS testing.
We utilize GPUHarbor to perform a large-scale study on weak behaviors in 106 GPUs from seven vendors and find two bugs in GPUs running on mobile devices. Our results show the importance of scaling previous MCS testing strategies in order to characterize the behavior of different devices, perform conformance testing, and design application testing strategies.

**ACKNOWLEDGMENTS**

We thank the reviewers whose feedback helped strengthen the paper and motivated the lessons learned section. We thank Jan-Harald Frederiksen from Arm for working with us to confirm the bug on Arm's devices, and Jeff Bolz from NVIDIA for finding and confirming the bug in NVIDIA's compiler. We thank David Neto and Alan Baker from Google for feedback on the description of the WebGPU memory model and our results analysis. We thank everyone who submitted anonymous data for this study, including friends and family. This work was supported by a gift from Google.

Received 2023-02-16; accepted 2023-05-03
# Don't Read Too Much Into It: Adaptive Computation for Open-Domain Question Answering

Yuxiang Wu, Sebastian Riedel, Pasquale Minervini, Pontus Stenetorp
University College London
{yuxiang.wu,s.riedel,p.minervini,p.stenetorp}@cs.ucl.ac.uk

## Abstract

Most approaches to Open-Domain Question Answering consist of a light-weight retriever that selects a set of candidate passages, and a computationally expensive reader that examines the passages to identify the correct answer. Previous works have shown that as the number of retrieved passages increases, so does the performance of the reader. However, they assume all retrieved passages are of equal importance and allocate the same amount of computation to them, leading to a substantial increase in computational cost. To reduce this cost, we propose the use of adaptive computation to control the computational budget allocated for the passages to be read. We first introduce a technique operating on individual passages in isolation which relies on anytime prediction and a per-layer estimation of an early exit probability. We then introduce SKYLINEBUILDER, an approach for dynamically deciding on which passage to allocate computation at each step, based on a resource allocation policy trained via reinforcement learning. Our results on SQuAD-Open show that adaptive computation with global prioritisation improves over several strong static and adaptive methods, leading to a 4.3x reduction in computation while retaining 95% of the performance of the full model.

## 1 Introduction

Open-Domain Question Answering (ODQA) requires a system to answer questions using a large collection of documents as the information source. In contrast to context-based machine comprehension, where models are to extract answers from single paragraphs or documents, it poses a fundamental technical challenge in machine reading at scale (Chen et al., 2017).
Most ODQA systems consist of two-stage pipelines, where 1) a context retriever such as BM25 (Robertson, 2004) or DPR (Karpukhin et al., 2020) first selects a small subset of passages that are likely to contain the answer to the question, and 2) a machine reader such as BERT (Devlin et al., 2019) then examines the retrieved contexts to extract the answer. This two-stage process leads to a computational trade-off that is indicated in Fig. 1. We can run computationally expensive deep networks on a large number of passages to increase the probability that we find the right answer ("All Layers, All Passages"), or cut the number of passages and layers to reduce the computational footprint at the possible cost of missing an answer ("6 Layers, Top-2 Passages").

We hypothesise that a better accuracy-efficiency trade-off can be found if the computational budget is not allocated statically, but based on the complexity of each passage; see "Adaptive Computation" in Fig. 1. If a passage is likely to contain the answer, allocate more computation. If it isn't, allocate less. The idea of conditioning neural network computation on the input has been pursued in previous work on Adaptive Computation (Bengio et al., 2015; Graves, 2016; Elbayad et al., 2020); however, how to apply this idea to ODQA is still an open research question.

In this work, we introduce two adaptive computation methods for ODQA: TOWERBUILDER and SKYLINEBUILDER. TOWERBUILDER builds a tower, a composition of transformer layers on a single passage, until an early stopping condition is met; we find that this method already helps to reduce the computational cost required for reading the retrieved passages. Then, for coordinating the construction of multiple towers in parallel, we introduce a global method, SKYLINEBUILDER, that incrementally builds multiple towers one layer at a time and learns a policy to decide which tower to extend by one more layer next.
Rather than building single transformer towers in isolation, it constructs a skyline of towers with different heights, based on which passages seem most promising to process further.

Figure 1: Static and adaptive computation for Open-Domain QA. Each block represents one layer of transformer computation on a passage. The solid arrows show how activations flow, and the dashed arrows indicate the order of computation. Only passage 10 contains the actual answer. Using all layers on all passages can find the answer, while processing only the top 2 retrieved passages with 6 layers is unable to find it. Adaptive computation can find the right passage, and allocates most computation budget to reading it.

Our experiments on the SQuAD-Open dataset show that our methods are very effective at reducing the computational footprint of ODQA models. In particular, we find that SKYLINEBUILDER retains 95% of the accuracy of a 24-layer model using only 5.6 layers on average. In comparison, an adaptation of the method proposed by Schwartz et al. (2020) requires 9 layers for achieving the same results. Improvements are even more substantial for smaller numbers of layers: for example, with an average of 3 layers SKYLINEBUILDER reaches 89% of the full performance, whereas the approach of Schwartz et al. (2020) yields 57% and a model trained to use exactly 3 layers reaches 65%. Finally, SKYLINEBUILDER retains nearly the same accuracy at full layer count.
To summarise, we make the following contributions: 1) we are the first to explore adaptive computation for ODQA, proposing two models: TOWERBUILDER and SKYLINEBUILDER; 2) we experimentally show that both methods can be used for adaptively allocating computational resources so as to retain predictive accuracy at a significantly lower cost, and that coordinating the building of multiple towers via a learned policy yields more accurate results; 3) when compared to their non-adaptive counterparts, our proposed methods can reduce the amount of computation by as much as 4.3 times.

## 2 Background

We first give an overview of ODQA and the relevant work in adaptive computation.

### 2.1 Open Domain Question Answering

In ODQA we are given a natural language query $q$ and a large number of passages $C$, for example, all paragraphs in Wikipedia. The goal is to use $C$ to produce the answer $y$. In extractive ODQA this answer corresponds to a span in one of the documents of $C$. The corpus $C$ can be very large, and a common approach to reduce computational costs is to first determine a smaller document set $D_q \subseteq C$ by retrieving the most relevant $n$ passages using an information retrieval module. Then we run a neural reader model on this subset. In most works, the reader model extracts answers by applying a per-passage reader to each input passage $x_1, \ldots, x_n \in D_q$ and then applies some form of aggregation function on the per-passage answers to produce a final answer. Note that the passage reader can either produce an answer span as output, or NoAnswer in case the passage does not contain an answer for the given question.

### 2.2 Transformers for ODQA

Most current ODQA models rely on transformer-based architectures (Vaswani et al., 2017), usually pre-trained, to implement the PReader passage reader interface.
In such models, an input passage is processed via a sequence of transformer layers; in the following, we denote the $i$-th transformer layer in the sequence as $\text{TransformerLayer}_i$. Let $h_i$ be the input to the $i$-th transformer layer and $h_{i+1} = \text{TransformerLayer}_i(h_i)$ its output. We set $h_1 = x$ to be the input passage. In standard non-adaptive transformer-based models, we incrementally build a tower—a composition of transformer layers—until we reach some pre-defined height $n$, and use an output layer to produce the final output, $y = \text{OutputLayer}(h_n)$. In this work, for efficiency reasons, we restrict ourselves to pre-trained ALBERT (Lan et al., 2020) models. One critical property of these models is parameter tying across layers: $\text{TransformerLayer}_i(h) = \text{TransformerLayer}_j(h)$ for any $i, j$.

### 2.3 Adaptive Computation

Our goal is to early-exit the iterative layer-by-layer process in order to save computation. We assume this can happen adaptively, based on the input, since some passages might require less computation to produce an answer than others. Schwartz et al. (2020) show how this can be achieved for classification tasks. They first require internal layers to be able to produce outputs too, yielding an anytime algorithm.\(^1\) This can be achieved with a suitable training objective. Next, for each candidate layer $i$, they calculate the exit probability given its hidden state $h_i$, and use it to take an early-exit decision: if the highest exit probability is above a global threshold \(\tau\), they return $\text{OutputLayer}(h_i)$; otherwise they continue with the following layers. The output layer probabilities are not calibrated for exit decisions, and hence Schwartz et al. (2020) tune them on a held-out validation set via temperature calibration (Guo et al., 2017; Desai and Durrett, 2020), where a temperature \(T\) is tuned to adapt the softmax output probabilities at each layer.
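The layer-by-layer tower computation of Section 2.2 can be sketched as follows. This is a toy illustration, not the paper's implementation: `build_tower` and `toy_layer` are our own names, and `toy_layer` is a trivial stand-in for a real transformer layer. With ALBERT-style parameter tying, the same function is applied at every height.

```python
import math

def build_tower(x, layer_fn, n_layers):
    """Grow a tower by repeatedly applying a transformer layer:
    h_{i+1} = TransformerLayer_i(h_i), starting from h_1 = x.
    With ALBERT-style parameter tying, the same layer_fn is
    reused at every height."""
    h = x
    for _ in range(n_layers):
        h = layer_fn(h)
    return h

# Toy stand-in for a layer: scale and squash each coordinate.
toy_layer = lambda h: [math.tanh(0.5 * v) for v in h]

out = build_tower([1.0, 2.0], toy_layer, n_layers=3)
```

Because the layer function is the same object at every height, the height of a tower can be decided at run time without loading any additional parameters, which is what makes early exit cheap in this setting.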
### 3 Adaptive Computation in ODQA

Our goal is to incrementally build up towers of transformer layers for all passages in \(D_q\) in a way that minimises unnecessary computation. Our algorithms maintain a state, or skyline, \(S = (H, A)\), consisting of the current tower heights \(H = (h_1, \ldots, h_n)\), indicating how many layers have been processed for each of the \(n\) towers, and the last representations \(A = (a_1, \ldots, a_n)\) computed for each of the towers. We want to build up the skyline so that we reach an accurate solution fast and then stop processing.

#### 3.1 Early Exit with Local Exit Probabilities

Our first proposal is to extend the method of Schwartz et al. (2020) in order to build up the skyline \(S\). In particular, we process each passage \(x_i \in D_q\) in isolation, building up height \(h_i\) and representation \(a_i\) until an exit probability reaches a threshold. For Schwartz et al. (2020) the exit probability is set to be the probability of the most likely class. While ODQA is not a classification problem per se, it requires solving one as a sub-step, either explicitly or implicitly: deciding whether a passage contains the answer. Accordingly, our first method, \textsc{TowerBuilder}, uses the probability \(1 - \text{HasAnswer}(a_i)\) of the passage not containing the answer as the exit probability at a given layer. In practice, the probability \(\text{HasAnswer}(a_i)\) is calculated as the sigmoid output of an MLP applied to the representation of the CLS token in \(a_i\). Moreover, models are trained to produce \(\text{HasAnswer}\) probabilities for each layer using a per-layer loss. Following Schwartz et al. (2020), we also conduct temperature calibration for the \(\text{HasAnswer}\) modules using the development set. When building up the towers, \textsc{TowerBuilder} produces early-exit decisions for each tower in isolation.
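The per-passage early-exit loop just described can be sketched as follows. All names here are our own, and the scalar "representations", `layer_fn`, and `has_answer` are toy stand-ins for the real ALBERT layer and the CLS-based HasAnswer head; only the control flow mirrors the method.

```python
def tower_builder(passages, layer_fn, has_answer, tau, max_layers):
    """Per-passage early exit: each tower grows in isolation and stops
    once the exit probability 1 - HasAnswer(a_i) reaches the threshold
    tau, i.e. once we are confident the passage holds no answer."""
    heights, reps = [], []
    for a in passages:
        h = 0
        while h < max_layers:
            a = layer_fn(a)  # a_i <- TransformerLayer(a_i)
            h += 1
            if 1.0 - has_answer(a) >= tau:
                break  # early exit for this tower
        heights.append(h)
        reps.append(a)
    return heights, reps

# Toy setup: a "layer" adds 1 to the scalar state, and the HasAnswer
# probability grows with the state's value.
layer_fn = lambda a: a + 1.0
has_answer = lambda a: min(a / 10.0, 1.0)

heights, _ = tower_builder([0.0, 5.0], layer_fn, has_answer,
                           tau=0.8, max_layers=6)
```

In the toy run, the first passage looks unpromising after one layer and exits immediately, while the second keeps accumulating layers up to the cap, which is exactly the per-tower behaviour the method exploits.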
Once all towers have been processed, the method selects the highest \(m\) towers in the final \(S^*\) to produce the final answer, where \(m\) is a hyperparameter. Since some of the selected towers in \(S^*\) may not have full height, we will need to continue unrolling them to full height to produce an answer. We call this the \textsc{LastLayer} strategy. Alternatively, we can return the solution at the current height, provided that we use an anytime model not just for \(\text{HasAnswer}\) predictions but also for answer extraction. We refer to this strategy as \textsc{AnyLayer}. By default we use \textsc{LastLayer}, but we conduct an ablation study of these two approaches in Section 5.3.

#### 3.2 Global Scheduling

We can apply \textsc{TowerBuilder} independently to each passage \(x_i \in D_q\). However, if we have already found an answer after building up one tower for a passage \(x_i\), we can avoid reading other passages. Generally, we expect that towers that are more likely to produce the answer should be processed first and get more layers allocated to them. To assess whether one tower is more likely to contain an answer, we need to compare the towers and decide which one has the highest priority. This type of strategy cannot be followed when processing passages in isolation, and hence we consider a global multi-passage view. A simple approach for operating on multiple passages is to re-use the information provided to the TOWERBUILDER method and select the next tower to extend using the HasAnswer probabilities. In particular, we can choose the next tower to build up as \( j = \text{arg max}_i \text{HasAnswer}(a_i) \), and then set \( a_j \leftarrow \text{TransformerLayer}(a_j) \) and \( h_j \leftarrow h_j + 1 \) in the state \( S \).

---

\(^1\)In practice, Schwartz et al. (2020) choose a subset of layers to be candidate output layers, so strictly speaking we cannot exit at any time, but only when a candidate layer is reached.
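This greedy selection rule can be implemented efficiently with a priority queue. A minimal sketch, again with scalar stand-ins and our own function names (`heapq` is a min-heap, so priorities are negated):

```python
import heapq

def skyline_builder_no_rl(states, layer_fn, has_answer, budget, max_layers):
    """Global scheduling without RL: repeatedly expand the tower whose
    current HasAnswer probability is highest, until the total layer
    budget is spent or every tower reaches max_layers."""
    states = list(states)
    heights = [0] * len(states)
    heap = [(-has_answer(a), i) for i, a in enumerate(states)]
    heapq.heapify(heap)
    for _ in range(budget):
        if not heap:
            break
        _, j = heapq.heappop(heap)
        states[j] = layer_fn(states[j])  # a_j <- TransformerLayer(a_j)
        heights[j] += 1                  # h_j <- h_j + 1
        if heights[j] < max_layers:
            # re-insert with the re-calculated priority
            heapq.heappush(heap, (-has_answer(states[j]), j))
    return heights, states

# Toy run: HasAnswer probabilities never change here, so the initially
# most promising tower (index 1) absorbs the whole budget.
heights, _ = skyline_builder_no_rl(
    [0.2, 0.9, 0.5], layer_fn=lambda a: a, has_answer=lambda a: a,
    budget=3, max_layers=24)
```

Each expansion costs one heap pop and one push, so the scheduling overhead is logarithmic in the number of passages.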
To efficiently implement this strategy we use a priority queue. Every time a tower is expanded, its HasAnswer probability is re-calculated and re-inserted into the priority queue, from which we choose the next tower. Once we reach the limit of our computation budget, we stop the reading process and return the results of the highest \( m \) towers \( S^* \) as inputs to the output phase. The two aforementioned answer extraction methods (i.e., AnyLayer and LastLayer) also apply to this method.

#### 3.3 Learning a Global Scheduler

Using HasAnswer probabilities to prioritise towers is a sensible first step, but not necessarily optimal. First, while the probabilities are calibrated, they are tuned for optimising the negative log-likelihood, not the actual performance of the method. Second, the HasAnswer probability might not capture everything we need to know about the towers in order to make decisions. For example, it might be important to know the rank of the tower's passage in the retrieval result, as higher-ranked passages might be more fruitful to expand. Finally, the HasAnswer probabilities are not learnt under the global competition of priorities across all towers, so they are not optimal for comparing priorities between towers that have different heights. To overcome the above issues, we frame the tower selection process as a reinforcement learning (RL) problem: we consider each tower in \( S \) as a candidate action, and learn a policy \( \pi(i|S) \) that determines which tower to expand next based on the current skyline. We present the corresponding details below.

#### 3.3.1 Policy

Our policy calculates \( \pi(i|S) \) using a priority vector \( \mathbf{p}(S) \in \mathbb{R}^n \). The priority \( p_i(S) \) of each tower \( i \) is calculated as a linear combination of the HasAnswer probability of that tower and the output of a multi-layer perceptron MLP\(_\theta \).
The perceptron is parametrised by \( \theta \) and uses a feature representation \( \mathbf{f}_i(S) \) of tower \( i \) in state \( S \) as input. Concretely, we have: \[ p_i(S) = \alpha \, \text{HasAnswer}(a_i) + \text{MLP}_\theta(\mathbf{f}_i(S)) \] where \( \alpha \) is a learnable mixture weight. As feature representation we use \( \mathbf{f}_i(S) = [\text{HeightEmb}(h_i), \text{IndexEmb}(i), \text{HasAnswer}(a_i)] \), where the tower height \( h_i \) and index \( i \) are represented using embedding matrices \( \text{HeightEmb} \in \mathbb{R}^{1 \times d} \) and \( \text{IndexEmb} \in \mathbb{R}^{n \times d} \), respectively. When a tower is still empty, an initial priority \( p_i^0 \) is used instead: it can either be a fixed value or a learnable parameter, and its impact is analysed in Section 5.2. Given the above priority vector, the policy simply maps the per-tower priorities to the probability simplex: \[ \pi(i|S) = \text{Softmax}_i(\mathbf{p}(S)). \] The parameters \((\alpha, \theta)\) introduced by this policy add little computational overhead: with embedding size \( d = 8 \) and 32-dimensional hidden representations in the MLP, the model only introduces 1,039 new parameters, a small number compared to ALBERT (\( \approx 18\text{M} \)).

#### 3.3.2 Training

While executing a policy, the scheduler needs to make discrete decisions as to which tower to pursue. These discrete decisions mean we cannot simply frame learning as optimising a differentiable loss function. Instead, we use the REINFORCE algorithm (Williams, 1992) to train our policy by maximising the expected cumulative reward. For us, this reward is defined as follows. Let \( i_1^m = i_1, \ldots, i_m \) and \( S_1^m = S_1, \ldots, S_m \) be a trajectory of (tower selection) actions and states, respectively.
We then set the cumulative reward to: \[ R(i_t^m, S_t^m) = r(i_t, S_t) + \gamma R(i_{t+1}^m, S_{t+1}^m) \] where \( r(i_t, S_t) \) is an immediate per-step reward we describe below, and \( \gamma \) is a discounting factor. We define the immediate per-step reward of choosing tower \( i \) in state \( S \) as \( r(i, S) = a - c \), where \( a = 1 \) if the selected tower contains an answer and \( a = 0 \) otherwise, and \( c \in \mathbb{R}_+ \) is a penalty cost of taking a step. In our experiments, we set \( c = 0.1 \).

### 4 Related Work

**Adaptive Computation** One strategy to reduce a model's complexity consists in dynamically deciding which layers to execute during inference (Bengio et al., 2015; Graves, 2016). Universal transformers (Dehghani et al., 2019) can learn after how many layers to emit an output, conditioned on the input. Elbayad et al. (2020) generalise universal transformers by also learning which layer to execute at each step. Schwartz et al. (2020) and Liu et al. (2020) propose methods that can adaptively decide when to stop the computation early in sentence classification tasks. To the best of our knowledge, previous work has focused on adaptive computation for a single input. We are the first to learn how to prioritise computation across instances in the context of ODQA.

**Smaller Networks** Another strategy consists in training smaller and more efficient models. In layer-wise dropout (Liu et al., 2018), layers are randomly removed during training, making the model robust to layer removal operations. This idea was extended by Fan et al. (2020) to modern transformer-based models. Other methods include distillation (Hinton et al., 2015) of a teacher model into a student model, pruning of architectures after training (LeCun et al., 1989), and quantisation of the parameter space (Wróbel et al., 2018; Shen et al., 2019; Zafrir et al., 2019). These methods are not adaptive, but could be used in concert with the methods proposed here.
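Before moving on, the scheduler of Sections 3.3.1–3.3.2 can be made concrete with a short sketch. The function names are our own, `mlp_scores` stands in for \(\text{MLP}_\theta(\mathbf{f}_i(S))\), and the \(a - c\) form of the per-step reward is our reading of the definition above.

```python
import math

def softmax(scores):
    """Numerically stable softmax over a list of priorities."""
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    z = sum(exps)
    return [e / z for e in exps]

def policy(has_answer_probs, mlp_scores, alpha):
    """pi(i|S) = Softmax_i(p(S)) with priorities
    p_i(S) = alpha * HasAnswer(a_i) + MLP_theta(f_i(S))."""
    p = [alpha * q + s for q, s in zip(has_answer_probs, mlp_scores)]
    return softmax(p)

def step_reward(contains_answer, c=0.1):
    """Immediate reward r(i_t, S_t): 1 if the selected tower's passage
    contains the answer (0 otherwise), minus the per-step cost c."""
    return (1.0 if contains_answer else 0.0) - c

def cumulative_reward(rewards, gamma=0.9):
    """R(i_t^m, S_t^m) = r(i_t, S_t) + gamma * R(i_{t+1}^m, S_{t+1}^m),
    computed by folding the recursion from the end of the trajectory."""
    R = 0.0
    for r in reversed(rewards):
        R = r + gamma * R
    return R

pi = policy([0.9, 0.1, 0.5], [0.0, 0.2, -0.1], alpha=1.0)
R = cumulative_reward([step_reward(False), step_reward(True)], gamma=0.9)
```

The step cost makes every expansion slightly negative unless it lands on an answer-bearing passage, which is what pushes the learned policy towards short, well-targeted trajectories.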
**Open Domain Question Answering** Most modern ODQA systems adopt a two-stage approach consisting of a retriever and a reader, such as DrQA (Chen et al., 2017), HardEM (Min et al., 2019), BERTserini (Yang et al., 2019), Multi-passage BERT (Wang et al., 2019), and PathRetriever (Asai et al., 2020). As observed by Chen et al. (2017); Yang et al. (2019); Karpukhin et al. (2020); Wang et al. (2019), the accuracy of such two-stage models increases with more passages retrieved. But it remains a challenge to efficiently read a large number of passages, as the reader models are usually quite computationally costly.

### 5 Experiments

**Dataset** SQuAD-Open (Chen et al., 2017) is a popular open-domain question answering dataset based on SQuAD. We partition the dataset into four subsets: a training set, two development sets (dev0 and dev1), and a test set; their details are summarised in Table 1.

<table> <thead> <tr> <th>SQuAD-Open</th> <th>train</th> <th>dev0</th> <th>dev1</th> <th>test</th> </tr> </thead> <tbody> <tr> <td>Size</td> <td>78,839</td> <td>4,379</td> <td>4,379</td> <td>10,570</td> </tr> <tr> <td>Hits@30</td> <td>71.2%</td> <td>72.7%</td> <td>72.1%</td> <td>77.9%</td> </tr> </tbody> </table>

Table 1: Dataset sizes and retriever performance.

**Experimental Setup** We follow the preprocessing approach proposed by Wang et al. (2019) and split passages into 100-word long chunks with 50-word long strides. We use a BM25 retriever to retrieve the top \( n \) passages for each question as inputs to the reader, and the Wikipedia dump provided by Chen et al. (2017) as the source corpus. Following Wang et al. (2019), we set \( n = 5 \) for training and \( n = 30 \) for test evaluations. Table 1 shows the Hits@30 results of our BM25 retriever on the dataset; they are comparable with previous work (Yang et al., 2019; Wang et al., 2019).

**Reader Model** For all our experiments, we fine-tune a pre-trained ALBERT model (Lan et al., 2020), consisting of 24 transformer layers with cross-layer parameter sharing.
We do not use global normalisation (Clark and Gardner, 2018) in our implementation, but our full system (without adaptive computation) achieves an EM score of 52.6 and is comparable to Multi-passage BERT (Wang et al., 2019), which uses global normalisation.

**Training Pipeline** The anytime reader models are first trained on the training set and validated on dev0. Then we conduct temperature calibration on dev0. For SKYLINEBUILDER, the scheduler model is trained on dev0 with the calibrated anytime model, and validated with dev1.

**Baselines** Following Schwartz et al. (2020), we use three types of baselines: 1) the standard baseline that reads all passages and outputs predictions at the final layer, 2) the efficient baseline that always exits at a given intermediate layer for all passages, and is optimised to do so, 3) the top-k baseline that only reads the k top-ranked passages and predicts the answer at their final layers.

**Evaluation protocol** Our goal is to assess the computational efficiency of a given method in terms of accuracy vs. computational budget used. We follow Fan et al. (2020) and consider the computation of one layer as a unit of computational cost. In particular, we will assess how many layers, on average, each method builds up for each passage. Similarly to Schwartz et al. (2020), we show the accuracy-efficiency trade-off for different strategies by showing the computation cost on the x-axis, and the Exact Match (EM) score on the y-axis.

### 5.1 Static vs. Adaptive Computation

We first investigate how adaptive computation compares to the static baselines. We will focus on a single adaptive method, SKYLINEBUILDER, and assess different adaptive variants later. Fig. 2a shows the accuracy of SKYLINEBUILDER at different budgets when compared to the standard, efficient, and top-k baselines. We note that it reaches results similar to the static baselines with far fewer layers.
In particular, it yields substantially higher performance than static methods when the computational budget is smaller than ten layers. For example, when given four layers on average, SKYLINEBUILDER achieves an EM score of 48.0, significantly outperforming the top-k baseline's EM score of 44.2. In Table 2 we consider a setting where SKYLINEBUILDER and the static baselines reach comparable (95%) performance of the full 24-layer model. We see that simply reducing the number of passages to process gives a poor accuracy-efficiency trade-off, requiring 14.4 layers (or 18 passages) to achieve this accuracy. The efficient baseline fares better with 9.5 layers, but it is still outperformed by SKYLINEBUILDER, which only needs 5.6 layers on average to reach the desired accuracy.

<table> <thead> <tr> <th>Method</th> <th>Avg. #layers</th> <th>Reduction</th> </tr> </thead> <tbody> <tr> <td>Standard baseline</td> <td>24</td> <td>1.0x</td> </tr> <tr> <td>Efficient baseline</td> <td>9.5</td> <td>2.5x</td> </tr> <tr> <td>Top-k baseline</td> <td>14.4</td> <td>1.7x</td> </tr> <tr> <td>TOWERBUILDER</td> <td>9.0</td> <td>2.7x</td> </tr> <tr> <td>SKYLINEBUILDER (-RL)</td> <td>6.1</td> <td>3.9x</td> </tr> <tr> <td>SKYLINEBUILDER</td> <td><strong>5.6</strong></td> <td><strong>4.3x</strong></td> </tr> </tbody> </table>

Table 2: Reduction in layer computations while achieving 95% of the accuracy of the standard baseline.

### 5.2 Local vs. Global Models

What is the impact of globally selecting which towers to extend, rather than taking early-exit decisions on a per-tower basis? To answer this question, we consider two global methods: SKYLINEBUILDER and SKYLINEBUILDER(-RL), the method in Section 3.2 that uses HasAnswer probabilities as priorities without any RL-based selection policy. We compare both to the local method TOWERBUILDER. Fig. 2b shows that, while TOWERBUILDER outperforms SKYLINEBUILDER(-RL) for very low budgets, this is no longer the case for budgets larger than 4 layers.
This may be due to a tendency of SKYLINEBUILDER(-RL) to spend an initial computation budget on exploring many towers—in Fig. 3 we show examples of this behaviour. The figure also shows that SKYLINEBUILDER considerably outperforms both TOWERBUILDER and SKYLINEBUILDER(-RL). Along with the results in Table 2, the comparisons above indicate that 1) global scheduling across multiple towers is crucial for improving efficiency, and 2) optimising the adaptive policy with RL manages to exploit global features for tower selection, leading to further improvements.

### 5.3 Ablation Studies

**Any Layer vs. Last Layer Model** To compare the LastLayer and AnyLayer strategies introduced in Section 3.1, we show the behaviour of these methods for the SKYLINEBUILDER scheduling algorithm in Fig. 2c. Using an anytime answer extraction model has a negative effect on accuracy. We see this clearly at 24 layers, where AnyLayer lags substantially behind the standard baseline while LastLayer almost reaches it. This gap persists across the whole budget spectrum, leading to less accurate results except for very small budgets.

**Learning Initial Priorities** SKYLINEBUILDER uses a learnt initial priority for each tower. This not only enables it to learn which towers to process first at the beginning, but also how long to wait until other towers are visited. Fig. 2d shows the benefit gained from adopting this strategy: without training the initialisation priorities, SKYLINEBUILDER spends more computation on passages that are likely not needed. Once an average of 4 layers has been added, the benefit disappears, as SKYLINEBUILDER with learnt initial priorities will try to visit more candidates itself.

### 5.4 Quantitative Analysis

This section aims at understanding where and how our adaptive strategies behave differently, and what contributes to the gain in the accuracy-efficiency trade-off. We propose the following quantitative metrics: 1) \( \text{Var}(h) \): variance of the heights of the towers.
2) \( \text{Avg(rank)} \): average rank of the tower when the method chooses which tower to build on. 3) Flips: how often the strategy switches between towers, measuring the exploration-exploitation trade-off of a method. 4) \( h_+ - h_- \): \( h_+ \) (resp. \( h_- \)) is the average height of towers with (resp. without) an answer. Their difference measures the difference in the amount of computation between passages with the answer and those without. 5) HasAnswer Precision (HAP): how often a tower selection action selects a tower whose passage contains the answer.

We analyse our proposed methods along with the static baselines on the SQuAD development set; results are outlined in Table 3. Overall, the higher the HasAnswer Precision, the more accurate the method. This finding matches our intuition that, if a tower selection strategy can focus its computation on passages that contain the answer, it yields more accurate results with smaller computation budgets.

---

Table 3: Quantitative analysis on the SQuAD-Open dev1 set with top 30 passages and two layers of computation per passage on average.

<table> <thead> <tr> <th>Method</th> <th>\( \text{Var}(h) \)</th> <th>\( \text{Avg(rank)} \)</th> <th>Flips</th> <th>\( h_+ - h_- \)</th> <th>HAP</th> <th>Exact Match</th> </tr> </thead> <tbody> <tr> <td>Efficient baseline</td> <td>0.00</td> <td>14.50</td> <td>-</td> <td>0.00</td> <td>6.1%</td> <td>23.47</td> </tr> <tr> <td>TOWERBUILDER</td> <td>11.05</td> <td>13.38</td> <td>-</td> <td>3.68</td> <td>22.0%</td> <td>17.10</td> </tr> <tr> <td>SKYLINEBUILDER(-RL)</td> <td>7.46</td> <td>13.06</td> <td>13.37</td> <td>3.46</td> <td>27.4%</td> <td>27.95</td> </tr> <tr> <td>SKYLINEBUILDER</td> <td>12.71</td> <td>8.78</td> <td>6.48</td> <td>5.99</td> <td>40.5%</td> <td><strong>33.60</strong></td> </tr> </tbody> </table>

Comparing SkylineBuilder(-RL) and SkylineBuilder gives more insight into what the RL training scheme learns.
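Two of these diagnostics are straightforward to compute from a finished skyline; the following sketch reflects our reading of the definitions, with our own function names:

```python
from statistics import mean, pvariance

def skyline_metrics(heights, has_answer_flags):
    """Compute Var(h), the population variance of tower heights, and
    h_+ - h_-, the gap in mean height between answer-bearing and
    answer-free towers."""
    h_pos = [h for h, f in zip(heights, has_answer_flags) if f]
    h_neg = [h for h, f in zip(heights, has_answer_flags) if not f]
    return pvariance(heights), mean(h_pos) - mean(h_neg)

# Toy skyline: three towers, only the first contains the answer.
var_h, gap = skyline_metrics([4, 0, 2], [True, False, False])
```

A large positive gap, as in the toy example, indicates that computation was concentrated on answer-bearing passages.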
SkylineBuilder learns a policy with the highest Var($h$), the lowest Avg(rank), and the lowest number of tower flips, suggesting that 1) it focuses on a few towers rather than distributing its computation over all passages, 2) it is more likely to select top-ranked passages, and 3) it switches less between towers, and tends to build one tower before switching to another. SkylineBuilder also yields the highest HasAnswer Precision and $h_+ - h_-$, meaning that it tends to prioritise the passages containing the answer.

### 5.5 Qualitative Analysis and Visualisation

Here we analyse how different methods build the skyline. Fig. 3 shows some examples of skylines built by SkylineBuilder(-RL) and SkylineBuilder. The towers are ordered from left to right by the rank of their associated passages in the retrieval results, and are built bottom-up. The colour gradient of the blue blocks reflects the order in which the layers are built: darker cells correspond to layers created later in the process. In Fig. 3a and Fig. 3b we can see that SkylineBuilder tends to focus on one or two towers, whereas SkylineBuilder(-RL) has a more even distribution of computation across different towers. In Fig. 3b, even though only one tower contains the answer, SkylineBuilder manages to locate it and build a full-height tower on it. Fig. 3c shows a case where none of the top 4 passages contains the answer. SkylineBuilder goes over these irrelevant towers quickly and starts exploring later towers, until it reaches the tower with rank 27 and becomes confident enough to keep building on it. These examples show how SkylineBuilder learns an efficient scheduling algorithm that locates passages containing the answer with very limited budgets. To understand how our proposed methods work at a macro level, we use heat-maps (Fig. 4) to show how frequently each block is selected. The green row at the bottom indicates the frequency with which each passage contains the answer.
SkylineBuilder(-RL) explores all passages quite evenly, whereas SkylineBuilder learns to prioritise top-ranked towers. This preference is reasonable because, as shown by the green row at the bottom, top-ranked towers are more likely to contain the answer. Also note that SkylineBuilder does not naively process towers from left to right like the top-$k$ baseline does; instead, it learns a trade-off between exploration and exploitation, leading to the significant improvement over the top-$k$ baseline shown in Fig. 2a.

### 5.6 Adaptive Computation vs. Distillation

Distillation is another, orthogonal approach to reducing computational cost. We compare our adaptive computation method **SKYLINEBUILDER** with a static DistilBERT (Sanh et al., 2019) baseline; the results are shown in Table 4. Our method significantly outperforms DistilBERT while computing far fewer layers.

<table> <thead> <tr> <th>Models</th> <th>Num. layers</th> <th>EM</th> </tr> </thead> <tbody> <tr> <td>DistilBERT (Sanh et al., 2019)</td> <td>6</td> <td>40.5</td> </tr> <tr> <td><strong>SKYLINEBUILDER</strong></td> <td>1.6</td> <td>41.1</td> </tr> <tr> <td><strong>SKYLINEBUILDER</strong></td> <td>3</td> <td>46.4</td> </tr> <tr> <td><strong>SKYLINEBUILDER</strong></td> <td>6</td> <td>49.7</td> </tr> </tbody> </table>

Table 4: Comparing adaptive computation with distillation on the SQuAD-Open test set.

### 6 Discussion and Future Work

In this paper, we focus on reducing the number of layers and operations of ODQA models, but the actual latency improvement also depends on the hardware specifications. On GPUs we cannot expect a reduction in the number of operations to translate 1:1 into lower execution times, since GPUs are highly optimised for parallelism.\(^3\) We leave the parallelism enhancements of **SKYLINEBUILDER** for future work. We also note that the distillation technique is complementary to the adaptive computation methods.
It will be interesting to integrate these two approaches to achieve further computation reduction for ODQA models. ### 7 Conclusions In this work we show that adaptive computation can lead to substantial efficiency improvements for ODQA. In particular, we find that it is important to allocate budget dynamically across a large number of passages and prioritise different passages according to various features such as the probability that the passage has an answer. Our best results emerge when we learn prioritisation policies using reinforcement learning that can switch between exploration and exploitation. On our benchmark, our method achieves 95% of the accuracy of a 24-layer model while only needing 5.6 layers on average. --- \(^3\)When evaluated on an NVIDIA TITAN X GPU, our proposed **SKYLINEBuilder** achieves approximately 2.6x latency reduction while retaining 95% of the performance. ### Acknowledgements This research was supported by the European Union’s Horizon 2020 research and innovation programme under grant agreement no. 875160. ### References Liyuan Liu, Xiang Ren, Jingbo Shang, Xiaotao Gu, Jian Peng, and Jiawei Han. 2018. Efficient contextualized representation: Language model pruning for sequence labeling. In EMNLP, pages 1215–1225. Association for Computational Linguistics. 
### A Experimental Details #### A.1 Hyper-parameters <table> <thead> <tr> <th>Hyper-parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>learning rate</td> <td>3e-5</td> </tr> <tr> <td>weight decay</td> <td>0.01</td> </tr> <tr> <td>batch size</td> <td>48</td> </tr> <tr> <td>epoch</td> <td>2</td> </tr> <tr> <td>optimiser</td> <td>Adam</td> </tr> <tr> <td>Adam $\epsilon$</td> <td>1e-6</td> </tr> <tr> <td>Adam $(\beta_1, \beta_2)$</td> <td>(0.9, 0.999)</td> </tr> <tr> <td>warmup ratio</td> <td>10%</td> </tr> <tr> <td>max sequence length</td> <td>200</td> </tr> <tr> <td>max question length</td> <td>100</td> </tr> <tr> <td>max answer length</td> <td>30</td> </tr> <tr> <td>number of passages</td> <td>5</td> </tr> <tr> <td>dropout</td> <td>0.0</td> </tr> <tr> <td>pretrained model</td> <td>albert-large-v2</td> </tr> <tr> <td>number of parameters</td> <td>18M</td> </tr> <tr> <td>device</td> <td>Nvidia Titan X</td> </tr> </tbody> </table> Table 5: Hyper-parameters for reader model training. <table> <thead> <tr> <th>Hyper-parameter</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>learning rate</td> <td>1e-3</td> </tr> <tr> <td>batch size</td> <td>32</td> </tr> <tr> <td>epoch</td> <td>16</td> </tr> <tr> <td>optimiser</td> <td>SGD</td> </tr> <tr> <td>max number of steps</td> <td>240</td> </tr> <tr> <td>step cost $c$</td> <td>0.1</td> </tr> <tr> <td>discount factor $\gamma$</td> <td>0.9</td> </tr> <tr> <td>number of passages</td> <td>30</td> </tr> </tbody> </table> Table 6: Hyper-parameters for scheduler model RL training.
25778, null], [25778, 28464, null], [28464, 30248, null], [30248, 31319, null], [31319, 33887, null], [33887, 35508, null], [35508, 36773, null], [36773, 38174, null], [38174, 40987, null], [40987, 43883, null], [43883, 45820, null], [45820, 45922, null]], "google_gemma-3-12b-it_is_public_document": [[0, 2485, true], [2485, 4208, null], [4208, 7068, null], [7068, 10223, null], [10223, 12644, null], [12644, 15405, null], [15405, 17999, null], [17999, 20034, null], [20034, 23161, null], [23161, 25778, null], [25778, 28464, null], [28464, 30248, null], [30248, 31319, null], [31319, 33887, null], [33887, 35508, null], [35508, 36773, null], [36773, 38174, null], [38174, 40987, null], [40987, 43883, null], [43883, 45820, null], [45820, 45922, null]], "google_gemma-3-4b-it_v2tag__is_academic_paper": [[0, 5000, true], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_class_syllabus": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_completion_certificate": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_court_notice": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_homework_assignment": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_news_article": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_public_order": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_resume_cv": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_test_or_quiz": [[0, 5000, false], [5000, 45922, null]], "google_gemma-3-4b-it_v2tag__is_textbook": [[0, 5000, false], [5000, 45922, null]], "pdf_page_numbers": [[0, 2485, 1], [2485, 4208, 2], [4208, 7068, 3], [7068, 10223, 4], [10223, 12644, 5], [12644, 15405, 6], [15405, 17999, 7], [17999, 20034, 8], [20034, 23161, 9], [23161, 25778, 10], [25778, 28464, 11], [28464, 30248, 12], [30248, 31319, 13], [31319, 33887, 14], [33887, 35508, 15], [35508, 36773, 16], [36773, 38174, 17], [38174, 
40987, 18], [40987, 43883, 19], [43883, 45820, 20], [45820, 45922, 21]], "pipe_delimited_lines_v1__pipe_delimited_lines_v1__pipe_delimited_lines_ratio": [[0, 45922, 0.24194]]}
olmocr_science_pdfs
2024-11-30
2024-11-30
c5fb6cf633663176594828639d0c1dbbb1621655
Table of Contents

- Notes from the Editor – Deepak Subramanian
- OWASP AppSec Asia 2011 – Helen Gao, China AppSec 2011
- Membership Committee
- Congratulations OWASP ZAP – Jason Li
- State of Confusion: Security in State Management with ASP.NET – Tim Kulp
  - Unique challenge/Opportunity to fail
  - Client Side State Management
  - Server Side State Management
  - Conclusion
  - Works Cited
- Projects Reboot 2012
  - What is the OWASP Project ReBoot initiative?
  - Current Submissions
  - Key Dates
  - Activity types
  - Can I apply for this Reboot?
  - How does funding work?
- OWASP Podcast – Hosted by Jim Manico
- OWASP TOP 10 with Hacking-Lab – Martin Knobloch
- OWASP News – Michael Coates
- Upcoming Events
  - Global AppSec Events
  - Regional and Local Events
  - Partner and Promotional Events
- Global Committees
- ARTICLE I – OWASP Bylaws
- Organization and Barter In Trade Supporters
- Academic Supporters
- The OWASP Foundation
  - OWASP Membership
  - OWASP Membership Categories
  - Other ways to Support OWASP
  - Newsletter Advertising

Notes from the Editor
Deepak Subramanian

We thank you for the overwhelming response to the previous newsletter. The submission of articles has also improved greatly. We would, however, like to encourage a much higher number of submissions to the newsletter.
This is a call for papers and articles for the next quarterly issue, due in July 2012. The submission can be made as a complete article or in stages. The preferred timeline for submission in stages is as follows:

1. Submission of abstract – 15th June 2012
2. Submission of first draft – 30th June 2012
3. Review and submission of final draft – 20th July 2012

If you plan to submit a complete article, the final deadline for submissions is 15th July 2012. Any submission made beyond these deadlines will be considered for the September newsletter. The OWASP Newsletter aims to carry many research publications, and we welcome research articles with a great deal of enthusiasm. Any suggestions for making this newsletter better are appreciated.

Email: deepak.subramanian@owasp.org

OWASP AppSec Asia 2011
Helen Gao, China AppSec 2011

OWASP AppSec Asia 2011 was successfully held in Beijing, Nov 8-11, 2011. The four-day event consisted of a two-day conference and two days of training. More than four hundred people from over ten countries attended. OWASP Board member Sebastien Deleersnyder kicked off the conference with a call to action to participate in OWASP and contribute to a safe computing ecosystem. The topics of discussion covered many areas of application security, including cloud security, database security, encryption, secure software development, RFID security, and mitigation of XSS and other threats. Dr. Liping Ding, Dr. Frank Fan, Cassio Goldschmidt, Tobias Gondrom, Mano Paul, Wei Zhang, and Dr. Meng-Chow Kang were among the speakers and trainers. During the OWASP leader workshop led by Global Chapter Committee chairman Tin Zaw, leaders from China, Korea, Malaysia and Indonesia shared their experience and ideas with those from the United States, Europe and South America. This is the third year that the China chapters have hosted a large-scale OWASP event.
For the first time, the event featured a formal product exhibition in which fourteen vendors participated. Seven media companies from inside and outside of China covered the conference. The gathering also celebrated the tenth anniversary of OWASP, as well as the enormous growth of the OWASP network, especially in the Asia-Pacific region. Registered membership of the China chapters increased four hundred percent last year, to more than eight hundred people.

Membership Committee

The main goal for 2012 is to increase membership by at least twenty percent. We will organize global OWASP membership drives, and we will continue looking for ways to create incentives for people to join. In the past, we created a number of membership models tailored to different organizations and individuals. Over the years, some of them have become redundant; some have proven confusing or inefficient. We will simplify Individual Supporters, clarify Local Chapter Supporters and Single Meeting Supporters, and define the roles and responsibilities of Barter-in-Trade, University and Government Supporters. If you think these goals are ambitious, you are right. The Membership Committee is currently the smallest of the seven committees. If you have ever considered joining an OWASP committee, this is one in which you can make a difference. If you believe in stable OWASP finances, if you are convinced of the importance of contributions from both individuals and organizations, and if you agree that the committee should better reflect the population of participants, then join us and make it a reality. The Membership Committee and the Connection Committee have already had a successful meeting to look into new methods of reaching out to the mainstream and technical media. In the coming weeks, we will be contacting leaders of local chapters and other committees to create plans for Corporate Supporter and Individual Supporter membership drives.
Do you think OWASP membership is worthwhile to you and your company? Have you or someone you know ever run a successful membership drive? We welcome your suggestions. The Membership Committee holds a phone conference at noon EST on the 3rd Tuesday of each month. The meeting agenda is posted to the mailing list prior to the meeting. You don't have to be a committee member to attend. To join the mailing list, go to the OWASP home page and click on Mailing List, then select Global_membership_committee, or simply go to http://tinyurl.com/OWASPMembership. You can also send an email directly to helen.gao@owasp.org.

Congratulations OWASP ZAP
Global Projects Committee Leader – Jason Li

The OWASP Newsletter, in association with the Global Projects Committee, congratulates the OWASP ZAP Project on winning the 2011 Toolsmith Tool of the Year Award. ZAP was featured in the November 2011 toolsmith article of the ISSA Journal; toolsmith highlights a different security-related project each month. ZAP is a great example of an OWASP Project: an open source project led by a passionate leader that helps improve the state of application security. Congrats to Simon Bennetts (aka Psiinon) on a job well done! If you haven't checked out the OWASP ZAP Project, read about it here: https://www.owasp.org/index.php/OWASP_Zed_Attack_Proxy_Project

State of Confusion: Security in State Management with ASP.NET
Tim Kulp

Imagine being handed a package and being told that it needs to go to the post office. You dutifully take the package to the post office and hand it to the clerk. The clerk looks at you and asks, "Did you package this with bubble wrap or foam peanuts?" You think for a minute… you were just handed the package; no one told you how it was packaged. You do not know anything that happened before the package was put in your hands. This is the world of HTTP, a stateless protocol in which each request sent between the client and server is disconnected from any prior request.
HTTP does not know what happened prior to being handed the request, only that it has data to deliver to an address. As web developers we need to compensate for HTTP's stateless nature by building state management solutions. In this article, we will explore various state management solutions with ASP.NET and the security concerns of each. By the end of this article you will have a strong knowledge of the foibles of state management, how ASP.NET can be used for your solution, and how to keep your state data safe.

## Unique challenge/Opportunity to fail

Often developers think of state management as occurring on the client (through HTML fields, cookies, etc.) or on the server (Session variables). What they often fail to recognize is that client-server applications present three environments for state management:

1. On the client,
2. On the server, and
3. In transit between the two.

Each of these environments presents an opportunity for security vulnerabilities to slip into your application. Attempting to build a custom state management solution that addresses all three environments is incredibly challenging and fraught with pitfalls that could snare even the most seasoned engineer. Questions like "How am I going to persist control settings between the client and server?" or "How can I prevent an attacker from replaying malicious state on a valid user's account?" can lead to a minefield of threats, countermeasures and compensating controls. A common and high-impact risk for application security, according to the OWASP Top Ten, is Broken Authentication and Session Management, which directly involves developers trying to build custom session/state management solutions that in the end only contribute to their application's security issues (OWASP Top Ten Team, 2011). Custom session/state management solutions might address the immediate needs of an application but often are not comprehensive in covering topics such as associating session/state with authentication so that when a user logs out, their session information is disposed.
In the end, with the constraints of project time and budget, custom state management solutions can cause more security issues than they resolve. Fortunately, there are numerous development frameworks with state management solutions that have been widely tested and deeply examined. ASP.NET provides a robust state management solution. Depending on your understanding, state management in ASP.NET can be complex and confusing or simple and reliable. This article will help you understand View State and the plethora of other tools at your fingertips for remembering information about your users. As we explore each state management tool, we will examine some of the vulnerabilities it introduces as well as countermeasures to protect your system.

## Client Side State Management

With client side state management, the client maintains state information and sends it to the server with each request. Figure 1 (Northrup & Snell, 2010, p. 121) illustrates a sample client side state management solution for a corporate reporting application. The user has subscribed to two reports and set their personalized design theme to "SPRING". This state is transmitted to the server with each request to be consumed and processed.

[Figure 1: Sample client side state management model (Northrup & Snell, 2010, p. 121)]

Using client side state management allows for scalability in your applications. Because the data resides on the client system, server memory can be used for processing instead of data storage. While this solution can provide increased scalability, you pay for it by exposing state data to the client machine. This introduces the first attack we will explore, one common to all client side state management solutions: Parameter Manipulation. Malicious users can modify parameters stored on the client to abuse the trust the system has in the client (Meier, et al., 2003). Returning to Figure 1, consider if a malicious user modified which report they are to receive and the server did not validate that the user has permission to the resource. The malicious user would then have access to a resource they do not have permission to, violating the confidentiality of the resource. Parameter Manipulation will be a recurring topic as we examine the client side state management tools offered by ASP.NET. Each control offers unique concerns and solutions for this attack. ASP.NET offers a variety of client side state management tools, including:

- Query String parameters
- Hidden Fields
- Cookies
- View State
- Control State

Regardless of the tool you use, the golden rule of client side state management is to NEVER store sensitive information on the client (Meier, et al., 2003). Storing information like the font family to use for a control is fine, but credit card information, personal health records, or anything else that is considered sensitive should remain on the server. On the same note, do not store information needed for making security decisions on the client. Using Parameter Manipulation, an attacker can elevate their privileges or perform unauthorized actions if the application relies solely on parameters stored on the client.

### Query String Parameters

Using HTTP GET parameters, ASP.NET can pass data from the client to the server through the query string. As an example, a URL such as http://some-e-com-site.com/product-details.aspx?id=9 passes the product ID to the product details page. The product-details.aspx page can then load the id parameter using the following code:

```csharp
int prodId = Convert.ToInt32(Request.QueryString["id"]);
```

Query String parameters are ideal for small amounts of data, as many browsers restrict the URL length to just under 2,100 characters (Northrup & Snell, 2010), which makes them well suited to passing identifiers, abbreviations, and the like. Another use for Query String parameters is to permit users to bookmark requests for specific resources.
For the above example, a user can send the URL to a friend to directly view product id nine, or a user can quickly return to this address to see the product details again. A malicious user can easily modify query string parameters through their browser (or an HTTP request transmission tool). Query String parameters are the first state management solution explored here because they are the easiest example of Parameter Manipulation. When using a Query String parameter, start with the following questions:

1. What are the boundaries of this value? What is the maximum possible value? What is the minimum possible value?
2. What if someone passes in something I'm not expecting?

These are standard input validation questions. The first represents boundary testing. For our URL example: what will happen if id=-1? Will the system crash, find nothing, or display a message saying "id" is not a recognized value? Query String parameters make it very easy to manipulate parameters using tools like Fiddler (http://fiddler2.org) or Hackbar (an excellent Firefox extension by Johan Adriaans). Establish acceptable boundaries around data points and ensure that incoming data conforms to those boundaries. As an example, if some-e-com-site.com only has ten products (id 1 through 10), your application does not need to support an id number greater than ten or less than one. Question #2 above is an example of Equivalence Partitioning. This testing technique groups input into blocks of data to minimize the number of tests that need to be run. As an example, one partition could be alpha characters. If I pass in id=A and the system crashes due to the alpha character, I do not have to test each letter of the alphabet to verify that the system will crash. Partitions can also be data outside of known boundaries (such as id=11 or id=0 when we only have id 1 through 10).
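The partitions just described can be exercised with a table-driven check. A standalone sketch follows (plain JavaScript rather than the article's C#, and `isValidProductId` is a hypothetical helper enforcing the id 1 through 10 boundary):

```javascript
// Hypothetical validator enforcing the article's id 1..10 rule:
// at most two characters, digits only, value within bounds.
function isValidProductId(raw) {
  if (typeof raw !== 'string' || raw.length === 0 || raw.length > 2) return false;
  if (!/^[0-9]+$/.test(raw)) return false;        // alpha/symbol partitions
  const id = Number(raw);
  return id >= 1 && id <= 10;                     // boundary partitions
}

// One representative value per partition is enough:
const cases = {
  '5': true, '1': true, '10': true,   // inside the boundary
  '0': false, '11': false,            // just outside the boundary
  'A': false, '-1': false             // non-numeric partitions
};

for (const [input, expected] of Object.entries(cases)) {
  console.assert(isValidProductId(input) === expected, input);
}
```

If 'A' is rejected, every other alphabetic value will be rejected for the same reason; that is the labor-saving point of partitioning.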
By entering data from other partitions, as a security test, we want to make sure that the system maintains expected behavior and does not leak information or show a Yellow Screen of Death (YSoD). To protect our system, we can apply defense in depth by passing the data through several checks before actually using it. Specific to ASP.NET and C# (or VB.NET), you can use the TryParse method to validate that the provided data conforms to the data type to which the value will be cast. If the value provided can be cast as the expected data type (in this case int), the method returns true. Unlike the Parse method, TryParse does not throw an exception if the value cannot be cast. Many data types support TryParse in C# (such as bool, DateTime, decimal, etc.), allowing developers to check data types and maintain the flow of their application without an exception. Here is a simple validation of data using TryParse:

```csharp
string rawId = Request.QueryString["id"];
if (rawId != null && rawId.Length < 3)  // at most two digits
{
    int prodId;
    if (!int.TryParse(rawId, out prodId))
        displayMessageToScreen("Product Id must be a recognized value.");
    else if (prodId < 1 || prodId > 10)
        displayMessageToScreen("Product Id is not in a valid range.");
}
```

This code first checks that the value of Request.QueryString["id"] is present and fewer than three characters long, which avoids receiving values larger than a two-digit number. Next we check whether the value can be cast as the int data type for C# (which is an Int32). If it can, prodId is set to the parsed value. If the value cannot be cast, the displayMessageToScreen method executes, showing a friendly error message to the user (avoid the YSoD and throwing errors whenever possible; throwing is expensive from a memory perspective, and the YSoD makes your site appear broken).
Next we confirm that the value is in the expected boundary, again displaying an error if the value is less than 1 or greater than 10. This is a simple example, but it conveys the idea of how to use TryParse to check whether a value can be cast and then apply simple input validation to maintain our boundaries.

### Hidden Fields

A long time ago I was talking to a developer who said that their data was secure because it was stored in hidden fields. Since the user could not see them, the data was safe. Unfortunately this is extremely incorrect. The data might be safe from a non-snooping, non-malicious, non-curious user, but always remember that software is a tool and people like to tinker with their tools. To this end, believing that hidden fields provide security is subscribing to the "Security through Obscurity" fallacy. Hidden fields are HTML input tags with the type of "hidden". This prevents the tag from rendering as part of the web page layout, but it can easily be found by viewing the source of the HTML page. ASP.NET hidden fields are built using the HiddenField control, which renders as an input type="hidden" tag:

```html
<asp:HiddenField ID="hdn1" runat="server" Value="Some String Value"/>
```

Renders as:

```html
<input type="hidden" id="hdn1" name="hdn1" value="Some String Value"/>
```

The strength of hidden fields lies in passing non-user-friendly data back to the web page (such as a GUID) or in serving as a data container for an Ajax application that will eventually be passed back to the server in a postback. Hidden fields can be accessed (read/write) via JavaScript, making them an ideal bridge for passing data back to ASP.NET from the client on an Ajax-enabled page. Besides being aware of the Parameter Manipulation attack, developers need to know that hidden fields do not offer any protection for the data stored in them. The value of the field can be exposed simply by viewing the page source.
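To underline that point, here is a small standalone sketch (plain JavaScript, runnable outside the browser; `readHiddenValue` is an invented helper) showing that the "hidden" value sits in plain sight in the rendered markup:

```javascript
// The markup rendered by the HiddenField control above. The value is
// visible to anyone who chooses View Source -- no tooling required.
const renderedHtml =
  '<input type="hidden" id="hdn1" name="hdn1" value="Some String Value"/>';

// Any trivial script can pull the value back out of the page source.
function readHiddenValue(html) {
  const match = html.match(/value="([^"]*)"/);
  return match ? match[1] : null;
}

console.log(readHiddenValue(renderedHtml)); // prints: Some String Value
```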
Using an HTML hidden field places the responsibility on the developer to ensure the data provided is managed properly. Some developers have built very interesting client side encryption systems (jCrypt by Daniel Griesser is very interesting and easy to implement) which could be used to encrypt the data in a hidden field. You can build some regular expression checks on the data, but again, you are on your own. One of the great strengths of ASP.NET is its Validation controls. Using the RequiredFieldValidator control you can ensure that a value is provided, or using the CompareValidator you can ensure that the provided value conforms to parameters of other controls. Unfortunately, the ASP.NET validation controls do not work with the HiddenField control; all validation of hidden fields must be done manually. When dealing with hidden fields, ensure that you are using proper input validation to keep the data safe while it moves through the system. Avoid making security decisions based on any value in a hidden field, and always replicate any security checks on the server against hidden field data.

### Cookies

Cookies have been around for a long time on the web. Their implementation is almost ubiquitous, as sites leverage them for storing advertising campaign information, customer id, favorite color; the possibilities are as unique and endless as the many applications that dot the web. Unlike Query String parameters and hidden fields, cookies can live long after the page is closed. A cookie's lifetime is defined during the creation process:

```csharp
HttpCookie cookie = new HttpCookie("roles");
cookie.Value = "Access Maps, Access Reports, Access Reports";
cookie.Expires = DateTime.Now.AddDays(30);
Response.Cookies.Add(cookie);
```

In the code, line three dictates that the cookie will expire thirty days from now (the current date and time).
The value of the cookie can be any string (which could be a serialized XML object, a JSON object, or a string representation of some other data type). Since a cookie can store anything, you are limited only by your imagination and a very small file size (4 KB). Cookies are saved to the client system as a small text file that is only available to the website that created it. HTTP headers transfer cookies from the client to the server, making their data available to server side C# as well as client side JavaScript. Like all client side state management solutions, cookies are susceptible to Parameter Manipulation. As the data is stored in a text file, unencrypted by default, the content can be viewed, manipulated and saved for the next connection. Because they travel in the HTTP header, cookie values can also be manipulated in transit with a tool like Fiddler or OWASP's ZAP. A specific manifestation of Parameter Manipulation is manipulating a role cache. Sometimes a developer will cache a list of roles to which the user belongs in an effort to reduce database communication each time the system needs to authorize access. While on the surface this might seem like a great caching solution, a modification to the cookie can modify the user's role membership. As an example, in our sample cookie above, imagine changing the value of the roles cookie to "Admin", "Administrator" or various other synonyms for administrator access. If the application recognizes one of these roles and does not validate the value, the user can elevate their privileges in the system. Another vulnerability facing cookies is cookie hijacking, in which a malicious user steals the cookies of a legitimate user. Hijacking cookies is often the de facto illustration of a Cross Site Scripting (XSS) attack with more impact than flashing an alert message.
With just a few lines of code you can return the cookie content as output to malicious code:

```javascript
$(document).ready(function () {
    $('#btn').click(function (e) {
        _stealTheCookie(document.cookie);
    });
});

function _stealTheCookie(val) {
    $('#output').html(val);
}
```

While cookie theft might not matter when storing simplistic information like the user's color preferences, it is critical when the developer stores sensitive information in cookies. As mentioned previously, never store sensitive information in any client side state management solution. Even when encrypting the data, anything stored on the client's machine is in a hostile environment and open to exposure. Finally, look again at the malicious code for _stealTheCookie. It displays the value of the cookie as HTML in the browser. We know the cookie will be saved to the system as:

```
roles=Access Maps, Access Reports, Access Reports
```

What if I alter cookie.Value to be:

```
cookie.Value = "<iframe src='malsite.com'></iframe>";
```

This would render an iframe in the browser and, through some creative CSS, could lead to a very convincing site replacement. While the user believes they are interacting with the legitimate site, they are in fact interacting with malsite.com. Again, always validate input in your application and sanitize data going out to the user.

### View State

Of all the state management solutions used in ASP.NET, View State might be the most prevalent… and least understood. By default, every ASP.NET web page carries with it a hidden field called __VIEWSTATE. This field contains a base-64 encoded value that represents the state of all the controls on the web page. View State can be very small or very large depending on the complexity of the controls on the ASP.NET page.
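Because the __VIEWSTATE value is base-64 encoded rather than encrypted, anyone can reverse the encoding with no key. A standalone Node.js sketch (the payload string is illustrative; real View State is a serialized control graph, not a plain string):

```javascript
// Base-64 is an encoding, not encryption: it reverses with no key.
const encoded = Buffer.from('theme=SPRING;reports=1,2', 'utf8').toString('base64');
const decoded = Buffer.from(encoded, 'base64').toString('utf8');

console.log(decoded); // prints: theme=SPRING;reports=1,2
```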
Here is a sample of the __VIEWSTATE hidden field:

```
<input type="hidden" name="__VIEWSTATE" id="__VIEWSTATE" value="/wEPdWull7EyvMTEOMTI3OTk2FmGuU1mYysgU53teaoeeoaajf92zzLCje8JDDo0SpwDeldD/" />
```

Many developers see the value and assume a level of “encryption” due to a lack of knowledge about View State. Just like any other hidden field, __VIEWSTATE provides no default protection to prevent users from reading its contents. View State’s purpose is to store information about web controls, such as the value provided by a user, between postbacks. This reduces the code a developer needs to write to repopulate all fields after a trip to the server and is more a tool of convenience than a security control. In many code samples I have seen custom controls (ascx controls or custom control libraries) using View State as a dumping ground for all types of data, including entire datasets. The golden rule of client side state management applies to View State and must be remembered when building ASP.NET controls: never store sensitive information on the client (Meier, et al., 2003). In the case where you encounter someone’s code that has not followed the golden rule, ASP.NET can encrypt View State to keep its contents from prying eyes, and this is very easy to enable using the Page.ViewStateEncryptionMode property.

Encryption alone, however, does not stop an attacker from capturing a valid View State and submitting it in another user’s request. This is known as a View State Replay Attack (Baier, 2006). By capturing a valid View State through properly using your web application, attackers can craft a malicious View State for another user. The crafted View State can then be submitted by the victim through another attack such as Cross Site Scripting (XSS [https://www.owasp.org/index.php/Top_10_2010-A5]) or Cross Site Request Forgery (CSRF [https://www.owasp.org/index.php/Top_10_2010-A2]). Using a View State Replay Attack, victims can submit maliciously crafted requests to a web application that appear to be legitimate traffic. Fortunately, replay attacks can be mitigated by tying the View State to an individual user.
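As noted above, the base-64 value in __VIEWSTATE is an encoding, not encryption, and anyone can reverse it. A quick illustration in Node.js (the payload string here is made up for the example):

```javascript
// Base-64 is an encoding, not encryption: anyone can reverse it
// without any key. Sketch using Node's built-in Buffer.
const payload = 'LastSearch=salary report';

// Encode the way a hidden field's value is encoded...
const encoded = Buffer.from(payload, 'utf8').toString('base64');

// ...and decode it just as easily, no secret required.
const decoded = Buffer.from(encoded, 'base64').toString('utf8');

console.log(encoded);
console.log(decoded); // identical to the original payload
```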
All ASP.NET pages have a property called ViewStateUserKey which allows the developer to place a unique seed into the View State, tying the value to a unique piece of data. Often, the Session ID (from the Session object that we will examine further in the Server Side State Management section) is used to tie View State to a specific user session. Whether you use the Session ID, the username or a hash of both, the key to ViewStateUserKey is to use some data that is available to the server and is unique to the user. Using the following line of code, you can assign the ViewStateUserKey to the Session ID of the user:

```
Page.ViewStateUserKey = Session.SessionID;
```

Later we will examine how the Session ID can be hijacked, which could lead to an attack circumventing this control. Consider the level of security your application needs when building your ViewStateUserKey. Is Session Hijacking a likely attack? Can you use another piece of data such as the username or a value in the user’s Profile (we will examine Profiles later as well)?

Another common mistake that developers make is to assume View State is safe. View State should be treated as just another input field, like a text box, when dealing with its values and contents. While the View State MAC can provide a level of protection against malicious external users, as developers we need to build our components to protect against the insider threat as well. A disgruntled developer could build malicious components that inject vulnerabilities into a web application. By simply processing the contents of View State, your web application could give an angry developer an attack vector to “get back” at their employer. Always validate input into your application and sanitize anything that is returned to the user.

**Control State**

View State’s little brother, Control State is View State that cannot be disabled.
Using the Page.EnableViewState property, developers can turn off View State for the page or, at the configuration level, for the entire application. Control State is used by control developers to persist values in the case that View State is disabled. Control State is stored in the __VIEWSTATE hidden field and thus, from a security perspective, is an extension of View State. By properly securing View State and validating input, Control State will be secured as well.

**Server Side State Management**

While the client provides numerous state management options, the server offers different options with their own unique challenges. In server side state management, some information (depending on the technique used) is provided to identify the user to the server. The server stores and manages the state information. This solution avoids the parameter manipulation risks that client side state management carries but has its own challenges and opportunities for your application. State can be maintained on the server with ASP.NET using the following objects:

- Application
- Session
- Profile
- View State (View State can exist on the server)

Each object carries its own security baggage, usually related to access and scope. For access, you need to be aware of how your application will access the various state objects. Scope refers to properly scoping the data so that you do not provide content outside of the necessary scope. We will examine each of these as they apply to the various server side state objects.

**Application**

Application is used to store information that is needed for all users of an ASP.NET application. The data is stored in memory, allowing quick storage and retrieval. This object is ideal for storing small amounts of data that do not change from user to user (such as a default ID value). The Application object is a key value pair collection that is instantiated when the IIS web application starts and is lost when the application stops.
Use of the Application object should be limited to small amounts of data that are global in nature. Improperly scoping data can lead to an Information Disclosure vulnerability. If user specific information were stored in the Application object (which it never should be, as the Application object is specifically not for user specific data), it would be available to any user of the web application. While this is an example of the improper use of an object leading to a security vulnerability, it is an excellent example of how a simple mistake can expose your users’ data to anyone else using the application. When using the Application object, ensure that the data being stored is not specific to a user.

**Session**

The Session object is a key value pair collection that is associated with an individual user. Unlike the Application object, Sessions are not global in scope. Users are identified by the server using a SessionID value that is passed to the client. This ID is transmitted with each request and then used by the server to determine what (if any) data is stored for the user in the Session data store. Sessions are active as long as requests are sent to the server within the specified Session lifetime. This allows Sessions to expire after a specified time period.

By default, the SessionID is stored in a cookie on the client system. ASP.NET does support another option for SessionID communication between the client and server: through the URL. This transmits the SessionID as a part of the URL (using URL rewriting). By adding the following element to your web.config, you can configure your ASP.NET application to pass the SessionID through the URL:

```xml
<sessionState cookieless="true"/>
```

This will yield a URL that appears as follows:

http://localhost/(S(tmuwrs2ubkjgni4ulrznncy))/default.aspx

Notice the tmuwrs2ubkjgni4ulrznncy segment; this is the SessionID.
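To see how little protection the URL gives the SessionID, consider how trivially it can be pulled out of a captured address. A hypothetical sketch (the regular expression simply targets the S(...) segment of a cookieless URL; the function name is an assumption):

```javascript
// Pull the SessionID out of a cookieless ASP.NET URL. Anyone who
// sees the URL (logs, proxies, browser history, shared links) can
// do the same, which is why cookieless sessions are easy to hijack.
function extractSessionId(url) {
  const match = url.match(/S\(([^)]+)\)/);
  return match ? match[1] : null;
}

console.log(extractSessionId(
  'http://localhost/(S(tmuwrs2ubkjgni4ulrznncy))/default.aspx'
));
```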
ASP.NET will add the SessionID to each URL processed by the application without any effort by the developer. This is a great solution for client systems that do not accept cookies, but it leads to numerous security challenges. Once the SessionID is presented to the client computer, it is susceptible to parameter manipulation attacks. This is true for the cookie as well as the URL delivery method. While the SessionID is randomly generated (MSDN, 2011) and not easily guessed, it can be altered to another valid session ID that an attacker harvests by capturing a clear text transmission of the SessionID. Using a network monitoring tool like Wireshark, attackers can collect SessionIDs that are transmitted in clear text over the network. After collecting a SessionID, the attacker can simply modify their own SessionID to match that of another user, exposing any sensitive data stored in that user’s Session object. To avoid exposing the SessionID, avoid clear text transmission by using encrypted communication like SSL/TLS.

Another security challenge opened by using cookieless Sessions is the recycling of SessionIDs. By default, if a SessionID is submitted via a URL, ASP.NET will create a session with the provided ID. This could lead to two users having the same SessionID and result in an Information Disclosure vulnerability in the system. By sending the URL with the SessionID embedded in it via email or a search engine, another user can capture any data stored in another user’s Session. To replicate this vulnerability, open a website that uses cookieless sessions (or create one yourself [http://seccode.blogspot.com/2012/03/cookieless-sessions-with-aspnet.html]). Copy the URL with the Session ID to another browser (which in essence should create a new Session) and load the page. Notice that any preset Session variables will follow you to the new browser.
ASP.NET Session state provides numerous challenges to maintaining security in your application, and while no solution can be 100% secure, we can add layers of defense to make session compromise more difficult. First and foremost: **Housekeeping**. Make sure that you abandon sessions (using the Session.Abandon() method) when the user logs out or leaves (if possible) your application. Combining the Abandon method with the regenerateExpiredSessionId configuration setting will drastically reduce your exposure to session hijacking through a shared Session ID. The regenerateExpiredSessionId attribute of the sessionState configuration element is set to true by default. This setting ensures that when a client attempts to use an old Session ID, a new one is generated instead of reusing the supplied one. Using this setting together with the practice of abandoning unused sessions narrows the window in which a Session ID can be compromised to the time when it is in active use. While not bullet proof, this combination of configuration and best practice does reduce your possible attack surface (from a temporal point of view).

Next, consider signing the Session ID with information that is specific to individual users. By accompanying the Session ID with a message authentication code (MAC), you can add an extra layer of verification that the user supplying the Session ID value is the one to whom it was originally assigned. In his article Foiling Session Hijacking Attempts [http://technet.microsoft.com/en-us/query/cc300500], Jeff Prosise examines adding a MAC to the Session ID composed of the User Agent, the network portion of the user’s IP address and the Session ID. While each of these values could be spoofed, the concept is sound: using a MAC as another validity check on the incoming session information can make a session more difficult to hijack. In the article, Jeff uses an ASP.NET module to capture all incoming requests and validate the supplied session MAC.
If the MAC is not valid, the application logs the attempted hijack and denies access. Finally, consider secure transmission of Session ID values (via SSL). Exposing the Session ID through the URL or a cookie can lead to the numerous hijacking vulnerabilities discussed above. By encrypting the data in transmission, you reduce the possibility of a Session ID being captured in transit.

**Profile**

So far all the state management solutions we have examined have been temporary. Session state expires, client side methods depend on a specified life span, and even Application state can be lost with an IIS restart. Profile provides a persistent and enduring state management tool that allows users to store state information in your application that will be waiting for them on their next visit. Profiles are associated with individual users and stored according to the Profile Provider. By default, ASP.NET uses the SqlProfileProvider to store profile information in a Microsoft SQL Server or SQL Express database. Examination of the ASP.NET Provider Model is beyond the scope of this article, but more information can be found on the Microsoft Developer Network (MSDN [http://msdn.microsoft.com/en-us/library/014bec1k.aspx]). The core concept of the Provider Model is to be able to easily manage and configure commonly reused functionality in web applications. For the SqlProfileProvider, this means defining how to store the Profile information in a SQL Server or SQL Express database. You can build your own custom Profile Provider for storing data in Oracle, SQLite, XML or any data format that you want. This introduces the first vulnerability in ASP.NET Profiles: improperly built Providers. Ambitious developers hear about the Provider Model and instantly want to get their hands dirty building their own. Just like broken authentication schemes, a broken Provider can open numerous vulnerabilities in your system.
Use the existing Providers from ASP.NET when possible to reduce the risk of an insecure, custom developed Provider. If you need to build your own, review the produced code carefully to ensure that it follows secure coding best practices, avoids known attack patterns (like SQL Injection) and fails to a safe state.

Profiles use an extremely flexible implementation scheme. In the web.config file, the developer specifies which Profile fields are available by adding elements to the `<properties>` section of the `<profile>` element:

```
<profile>
  <providers>
    <clear/>
  </providers>
  <properties>
    <add name="FavoriteColor" allowAnonymous="false" type="System.String" defaultValue="Blue"/>
  </properties>
</profile>
```

In this example, the Profile object will have a property named “FavoriteColor” that is a string, is “Blue” by default and is not available to anonymous users (meaning that users must be authenticated to access this Profile property). Assigning values to the Profile property is as easy as:

```
Profile.FavoriteColor = TextBox1.Text;
Profile.Save();
```

This sets FavoriteColor to whatever was provided in the Text property of the TextBox1 control (an ASP.NET TextBox). Following with the Save method writes the Profile to the data store (as defined in the Provider) for later use. While working with the Profile can be very simple, you must remember to validate data going into it. Profile can provide an attack vector for storing malicious input for later use or display in the application. Remember to always validate user input on the server side.

Some sites use Profile data constantly to drive the application. FavoriteColor in this example might be used to build a CSS string that changes the background of the web site to whatever the user enters. To ease network traffic, Profiles are sometimes cached on the client as a cookie. While FavoriteColor does not expose critical information, other common Profile data points like First Name, Last Name, Date of Birth, etc...
can expose the user’s data on the client. When using client side storage to cache Profile data, be conscious of what data is being stored and how you are storing it. By default, Profiles stay between the server and database (rendering occurs through some other ASP.NET control). Keep sensitive data away from the client, as this is a general state management best practice.

**View State (again?)**

Yes, View State can be held on the server. Where ASP.NET stores View State is defined by classes that inherit from an abstract class called PageStatePersister. Classes such as HiddenFieldPageStatePersister implement how View State is stored for the Page. The default persister is the HiddenFieldPageStatePersister, which stores View State as a hidden field on the client using base-64 encoding for the output. Other persisters include the SessionPageStatePersister, which stores View State in the Session object. MSDN provides an excellent article on how to build a PageStatePersister using a System.IO.Stream [http://msdn.microsoft.com/en-us/library/system.web.ui.pagestatepersister.aspx].

Building your own persister opens your code up to the same security challenges as building a custom Provider. False assumptions and insecure code will leave your application vulnerable and, possibly, your View State exposed. When building a custom persister, review the code frequently to ensure that best practices are being followed and that you do not accidentally create a vulnerability while trying to move View State off the client. As you can imagine, removing View State from the client holds numerous benefits for the security and performance of the system. From a security perspective, you remove the exposure of View State to the client, which can mitigate parameter manipulation and View State Replay attacks. By taking the View State away from the client, you remove the control an attacker can have over the data stored in it.
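Holding View State on the server can be pictured as a simple per-user store. This is a hypothetical illustration, not the actual SessionPageStatePersister implementation; all names are assumed:

```javascript
// Hypothetical per-user server-side state store: entries are keyed
// per user (scope), retrievable only by the owner's key (access),
// and expire after a time-to-live (persistence/cleanup).
class ServerStateStore {
  constructor(ttlMs) {
    this.ttlMs = ttlMs;
    this.entries = new Map(); // userId -> { state, expiresAt }
  }

  save(userId, state) {
    this.entries.set(userId, { state, expiresAt: Date.now() + this.ttlMs });
  }

  load(userId) {
    const entry = this.entries.get(userId);
    if (!entry) return null;
    if (Date.now() > entry.expiresAt) {
      // Stale entry: purge it and refuse to serve old state.
      this.entries.delete(userId);
      return null;
    }
    return entry.state;
  }
}
```

The expiry check is what keeps stale state from being replayed long after a page was abandoned; the per-user key keeps one user's state out of everyone else's reach.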
When considering where to store the View State information, think of scope, access and persistence. For scope, storing View State as a value in the Application object would scope the data to everyone using the application. This is clearly a bad idea, as it opens a single View State to many users. How will you store View State so that only its owner has access to it and replay attacks are not possible? Access follows a similar line of thinking in that you want to ensure that only the user has access to their View State. If you write all View State information to a text file that is available to anyone who knows the address, you are exposing your View State data to others. Does your solution only grant access to the View State for the user who owns it? Finally, you must consider how long you want to keep the View State information in your system. If you persist View State to a database, is there a cleaning process that removes old View States? How will you ensure that only active View State information can be processed by your solution? Answering these questions can assist you in building a secure custom PageStatePersister and help you avoid implementation pitfalls that could lead to unintended Information Disclosure.

**Conclusion**

ASP.NET offers many state management tools, each with a purpose and intended use. With so many options, developers can become confused as to when to use which. Using the wrong state management tool can allow malicious input to reach your server or lead to attackers stealing your users’ data. Building secure state management requires an understanding of the challenges and opportunities each tool brings to your application. ASP.NET has done most of the heavy lifting for you related to state management. As a developer you must now piece these tools together and lock them down to keep your users’ data safe. In this article we explored numerous ASP.NET state management tools and the security concerns of each.
We focused on client side and server side state management, their strengths and weaknesses, and how to layer your defenses to allow each to do its job. By applying the practices and asking the questions presented within this article, you can build a powerful and more secure state management system for your next ASP.NET application.

**Works Cited**

---

**Projects Reboot 2012**

What is the OWASP Project ReBoot initiative? OWASP needs to refresh, revitalize & update its projects. We need to make the software development community more aware of our efforts and demonstrate the foundation’s library of solutions & guidance designed to help with the secure application development lifecycle. The proposal for this initiative is here: Project Re-Boot Proposal

Project Lead: Eoin Keary
Proposal Approval Team: Jim Manico, Rahim Jina, Tom Brennan

To that end we have a budget to fund various project related activities. The expected outcome of this initiative is to deliver some great high quality material which can be used to support software developers and testers for years to come.

Current Submissions

**OWASP Application Security Guide For CISOs**
**OWASP Development Guide**
**Zed Attack Proxy**
**OWASP Cheat Sheets**
**OWASP AppSensor**
**OWASP Mobile Project**

Key Dates

Submission closing date: July 30th 2012
First round of proposal selection: 15 June 2012
Second round of proposal selection: 10 Aug 2012

Activity types

Type 1: Update, rewrite & complete guides or tools. This type is aimed at both existing and new tools or guides which require development effort to update, augment, rewrite or develop in order to achieve a high quality release product. Examples:

1. “Mini” project based summits: expenses associated with running global workshops, with the aim of releasing a new version of a project.
2. Paying contributors for their time and effort.
3. Paying for user guides etc. to be professionally developed (technical writing etc.).

Type 2: Marketing, training, awareness, increasing adoption.
Existing, healthy, robust tools and guides can utilise Type 2 activities to help create awareness and increase adoption of that project. Examples:

1. Assisting with expenses associated with marketing a project.
2. Costs facilitating OWASP project focused training and awareness events.

Donate to help save a current or future software application.

Can I apply for this Reboot? You certainly can, assuming you are an OWASP member. If you feel your project is ready or has potential, you can apply for the reboot programme.

How does funding work?

Type 1: Funding can be applied for as required if travel, mini summits etc. are to be expensed as part of the reboot. For development activities, payment to contributors shall be at the 50% and 100% milestones. Milestones are agreed prior to project reboot initiation. Once the 50% milestone is reached, the work done to date shall be reviewed by a member of the GPC and also another nominated OWASP reviewer (generally an OWASP leader).

Type 2: Funding is supplied as required. Items to be funded are agreed prior to reboot initiation. Invoices for the required services are sent directly to the foundation for payment.

How do I apply? Send in a proposal with the following information:

1. Project name and description, including reboot project lead and any team members.
2. Reboot type (Type 1 or Type 2).
3. Goals of the reboot.
4. Timeline for the 50% milestone and the 100% milestone. Suggested milestone reviewers (generally OWASP leaders or other industry experts).
5. Budget required and how you shall spend it.

Want to support this initiative or learn more? Contact Eoin Keary

**OWASP Podcast**

The OWASP Podcast series is hosted by Mr. Jim Manico and features a wide variety of security experts. This week we feature Troy Hunt, a Microsoft MVP involved in .NET security.
Podcast Link: https://www.owasp.org/download/jmanico/owasp_podcast_91.mp3

The OWASP Global Education Committee (GEC) and Hacking-Lab have embarked on a joint educational project: the Academy Portal and Hacking-Lab’s remote security lab. While passive learning methods are generally acceptable for achieving lower levels of performance, an interactive learning environment allows the learner to achieve higher levels of performance (i).

OWASP Academy Portal
https://www.owasp.org/index.php/OWASP_Academy_Portal_Project

Since its launch at AppSec US in Minneapolis 2011, the portal has seen more than 6000 active global users, and more than 1072 individuals have signed up for the free OWASP TOP 10 challenges. Currently, the user with the nickname “bashrc” is leading the scoring of the OWASP TOP 10 event. Within the last couple of months, 167 users have successfully solved the OWASP challenges. The OWASP GEC team is checking submitted solutions day and night. This effort is driven through the support of the following key individuals: Martin Knobloch, Cecil Su, Steven van der Baan and Zaki Akhmad.

OWASP is planning to add additional challenges. Thanks to the Greece Hackademics project, additional challenges are now ready to be used for the planned OWASP online security competition in 2012. The winner will receive a free ticket to one of the OWASP international conferences. Through the efforts of volunteers, WebGoat has been integrated into the Hacking-Lab framework during the last couple of weeks. Thanks to Nicolas Hochart from Helsinki, the major work is done and we are in the quality assurance process before making it public. The addition of the Hackademics and WebGoat projects will introduce more than 20 new and free challenges available to everyone looking to gain some hands-on experience. The OWASP TOP 10 is only one important area of focus. Many additional, critical security aspects need special attention.
In response to some recent media discussions, OWASP now has additional security challenges for the Apache Struts2 security vulnerability plus the commonly unknown XML external entity attack (XXE).

Apache Struts2 Tutorial: http://media.hacking-lab.com/movies/struts2/

Don’t hesitate: start exploring the hands-on exercises. Joining the OWASP TOP 10 challenges is easy. Sign up for a Hacking-Lab account, register for the free OWASP TOP 10 challenges and get your free xUbuntu based LiveCD that provides everything you need to get started.

OWASP has created a new mailing list that is focused on bringing security information to anyone new to the security space. Have a question on a security topic? Wonder what best practices are recommended for a particular topic? Join the security101 mailing list and ask a question or help answer others! Join at the following link: https://lists.owasp.org/mailman/listinfo/security101

OWASP is starting a monthly security blitz where we will rally the security community around a particular topic. The topic may be a vulnerability, a defensive design approach, a technology or even a methodology. All members of the security community are encouraged to write blog posts, articles, patches to tools, videos etc. in the spirit of the current monthly topic. Our goal is to show a variety of views on the topic from the different perspectives of builders, breakers and defenders. https://www.owasp.org/index.php/OWASP_Security_Blitz

**OWASP Confirmed Member LinkedIn Group**

Curious to network with other OWASP members? Want to promote to the world that you support OWASP? If you’re an OWASP member, join the confirmed member LinkedIn group. Note: this is a virtual badge/membership card. There aren’t any resources or discussions in this LinkedIn group.
http://www.linkedin.com/groups?viewMembers=&gid=4342746&sik=1336166179573 https://www.owasp.org/index.php/Membership --- **Upcoming Events** ### Global AppSec Events <table> <thead> <tr> <th>Global AppSec Events</th> <th>Date</th> <th>Location</th> <th>GCC Rep</th> <th>OWASP Introduction/Keynote</th> </tr> </thead> <tbody> <tr> <td>Global AppSec Latin America 2012</td> <td>Q4 2012</td> <td>Montevideo, Uruguay</td> <td>TBD</td> <td>Tom Brennan</td> </tr> <tr> <td>OWASP AppSec ASIAPAC 2013</td> <td>Feb. 21, 2013 - Feb. 22, 2013</td> <td>Jeju</td> <td>TBD</td> <td>TBD</td> </tr> </tbody> </table> ### Regional and Local Events <table> <thead> <tr> <th>Event</th> <th>Date</th> <th>Location</th> <th>OWASP Participation</th> </tr> </thead> <tbody> <tr> <td>AppSec India 2012</td> <td>Aug. 24, 2012 - Aug. 25, 2012</td> <td>India</td> <td>Tom Brennan</td> </tr> <tr> <td>OWASP Ireland</td> <td>Sept. 4, 2012 - Sept. 6, 2012</td> <td>Dublin, Ireland</td> <td>Eoin Keary, Tom Brennan</td> </tr> </tbody> </table> ### Partner and Promotional Events <table> <thead> <tr> <th>Event</th> <th>Date</th> <th>Location</th> <th>OWASP Participation</th> </tr> </thead> <tbody> <tr> <td>BHack Conference</td> <td>June 14, 2012 - June 17, 2012</td> <td>Belo Horizonte/MG, Brazil</td> <td>TBD</td> </tr> <tr> <td>Cyber Security, Cyber Warfare and Digital Forensics (CyberSec12)</td> <td>June 26, 2012 - June 28, 2012</td> <td>Kuala Lumpur</td> <td>TBD</td> </tr> <tr> <td>BlackHat USA</td> <td>July 25, 2012 - July 26, 2012</td> <td>Las Vegas, NV</td> <td>TBD</td> </tr> </tbody> </table> **Global Committees** ### Global Chapter Committee **Mission** To provide the support required by the local chapters to thrive and contribute to the overall mission and goals of OWASP. **Committee Chair:** Josh Sokol ### Global Conference Committee **Mission** The OWASP Global Conferences Committee (GCC) exists to coordinate and facilitate OWASP conferences and events worldwide.
**Committee Chair:** Mark Bristow

### Global Connections Committee

**Mission** To help the OWASP Foundation communicate to the outside world in a unified and coherent way. We also assist with internal communication between different OWASP projects and committees.

**Committee Chair:** Jim Manico

### Global Education Committee

**Mission** Provide awareness, training and educational services to corporate, government and educational institutions on application security.

**Committee Chair:** Martin Knobloch

### Global Industry Committee

**Mission** The OWASP Global Industry Committee (GIC) shall expand awareness of and promote the inclusion of software security best practices in Industry, Government, Academia and regulatory agencies and be a voice for industry. This will be accomplished through outreach, including presentations, development of position papers and collaborative efforts with other entities.

**Committee Chair:** Rex Booth

### Global Membership Committee

**Mission** The Membership Committee recommends policies, procedures, and strategies for enhancing the membership in OWASP both numerically and qualitatively. The committee provides a written plan and recommends policies, procedures, and initiatives to assure a growing and vital membership organization.

**Committee Chair:** Dan Cornell

### Global Projects Committee

**Mission** To foster an active OWASP developer community, facilitate contributions from OWASP community members, provide support and direction for new projects, and encourage adoption of OWASP Projects by the global community at large.

**Committee Chair:** Jason Li

**ARTICLE I - OWASP Bylaws**

**Section 1.01: Offices**

The principal office of the Foundation in the State of Maryland shall be located in the County of Howard. The Foundation may have such other offices, either within or without the State of Maryland, as the Board of Directors may designate or as the business of the Foundation may require from time to time.
**Section 1.02: Purpose**

The OWASP Foundation will be the thriving global community that drives visibility and evolution in the safety and security of the world’s software.

**Section 1.03: Values**

OPEN: Everything at OWASP is radically transparent, from our finances to our code.
INNOVATION: OWASP encourages and supports innovation/experiments for solutions to software security challenges.
GLOBAL: Anyone around the world is encouraged to participate in the OWASP community.
INTEGRITY: OWASP is an honest and truthful, vendor agnostic, global community.

Organization and Barter In Trade Supporters

Academic Supporters

- Adelphi University
- Anglia Ruskin University
- AUM
- CIT
- DAKOTA STATE UNIVERSITY
- DCU
- HEIG-VD
- hochschule mannheim
- ISRTE IUL
- ISEP
- IDC HERZLIYA - Efi Arazi School of Computer Science
- TECNOLOGICO DE MONTERREY
- Nanyang Polytechnic
- RUTGERS
- TRINITY COLLEGE DUBLIN
- STEVENS INSTITUTE OF TECHNOLOGY
- UNIVERSIDAD ORT URUGUAY
- UNIVERSITY OF ZAGREB
- NORTHERN KENTUCKY UNIVERSITY
- Northumbria University
- THE GEORGE WASHINGTON UNIVERSITY
- NYU:POLY - POLYTECHNIC INSTITUTE OF NYU
- FACULTAD DE INGENIERIA - Universidad de Buenos Aires
- UCLA
- UdeMM - Universidad de la Maritima Mercante
- UT Dallas

**The OWASP Foundation**

The Open Web Application Security Project (OWASP) is an international community of security professionals dedicated to enabling organizations to conceive, develop, acquire, operate, and maintain applications that can be trusted. All of the OWASP tools, documents, forums, and chapters are free and open to anyone interested in improving application security. We advocate approaching application security as a people, process, and technology problem, because the most effective approaches to application security include improvements in all of these areas. OWASP is a new kind of organization. OWASP is not affiliated with any technology company, although we support the informed use of commercial security technology.
Our freedom from commercial pressures allows us to provide unbiased, practical, cost-effective information about application security. Similar to many open-source software projects, OWASP produces many types of materials in a collaborative, open way.

**Core Values**
- OPEN: Everything at OWASP is radically transparent, from our finances to our code.
- INNOVATION: OWASP encourages and supports innovation and experiments for solutions to software security challenges.
- GLOBAL: Anyone around the world is encouraged to participate in the OWASP community.
- INTEGRITY: OWASP is an honest and truthful, vendor-neutral, global community.

OWASP is an open community of application security professionals. The opportunities to participate in the organization are limitless.

**OWASP Membership**
The OWASP Foundation is a not-for-profit 501(c)(3) charitable organization not associated with any commercial product or service. To be successful we need your support. OWASP individuals and supporting educational and commercial organizations form an application security community that works together to create articles, methodologies, documentation, tools, and technologies.
A complete list of all OWASP members can be found here: https://www.owasp.org/index.php/Membership Individual Supporter - $50 USD/year - Underscore your awareness of web application software security - Receive Discounts to attend OWASP Conferences - Expand your personal network of contacts - Obtain an owasp.org email address - Allocate 40% of your membership dues to directly support your local chapter - Participate in Global Elections and vote on issues that shape the direction of the community Corporate Supporter - $5,000 USD/year - Tax deductible donation - Receive Discounts at OWASP Conferences to exhibit products/services - Opportunity to post a rotating banner ad on the owasp.org website for 30 days at no additional cost ($2,500 value) - Be recognized as a supporter by posting your company logo on the OWASP website - Be listed as a sponsor in the quarterly newsletter distributed to over 10,000 individuals - Have a collective voice via the Global Industry Committee - Participate in Global Elections and vote on issues that shape the direction of the community - Allocate 40% of your annual donation to directly support your choice of chapter and/or projects For More information on sponsorship opportunities, contact Kelly Santalucia at Kelly.santalucia@owasp.org JOIN NOW OWASP Membership Categories <table> <thead> <tr> <th>Voice During Elections</th> <th>Recognition on OWASP.org Website</th> <th>Discounts on Conferences</th> <th>Complimentary Advertising</th> <th>Recognition in Newsletter</th> <th>owasp.org email address</th> <th>Directly Support local chapter or project</th> </tr> </thead> <tbody> <tr> <td>Corporate Member</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> </tr> <tr> <td>Individual Member</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> </tr> <tr> <td>Government Supporter</td> <td></td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> </tr> <tr> <td>Academic Supporter</td> <td></td> <td>X</td> 
<td>X</td> <td>X</td> <td>X</td> <td>X</td> </tr> <tr> <td>Organizational Supporter</td> <td></td> <td></td> <td>X</td> <td>X</td> <td>X</td> <td>X</td> </tr> </tbody> </table>

**Other ways to Support OWASP**

**Local Chapter Supporter**
Organizations that are not yet interested in becoming a full Corporate Member, but who have a desire to direct their support in a more regional manner, may prefer to become a Local Chapter Supporter. Check with your local Chapter Leader to learn more about specific price levels for Chapter Supporters. The funds donated are divided with 90% directly supporting the OWASP local chapter and 10% going to the OWASP Foundation. [Local chapter pages]

**Single Meeting Supporter**
Organizations may also support an OWASP local chapter with a 100% tax-deductible donation that enables the OWASP Foundation to continue its mission. The fees are set by the local chapter, so contact the leader of the chapter that you want to work with. [Local chapter pages]

**Event Sponsorship**
Participate in one of our Global or Regional events by sponsoring the expo or providing tangibles to the conference attendees. [View Sponsorship Opportunities]

**Tax Deductible Donation**
The OWASP Foundation is a registered 501(c)(3) in the US as well as a Not for Profit entity in Europe. As a result, your direct donation is eligible to be deducted as a charitable donation. Please contact your tax advisor for complete information.

**Individual Participation**
With over 140 active chapters globally, hundreds of OWASP Projects, and millions of great ideas waiting to become projects, it would be difficult to NOT find a way to participate. All it takes to participate is a willingness to share ideas and collaborate with the key minds in the industry. Please reach out to your local chapter leader or a current project leader, or start your own!
**Newsletter Advertising**
- 1/4 page advertisement: $2,000
- 1/2 page advertisement: $2,500
- 1/2 page advertisement plus either a 30-day rotating banner on the OWASP site or 10 copies of the Top 10 Books: $3,000
- Full page advertisement: $5,000
- Year subscription (one newsletter every quarter with a 1/2 page advertisement posted): $9,000

Please contact Kelly.Santalucia@owasp.org or Kate.Hartmann@owasp.org for details.
Discovering Feature Flag Interdependencies in Microsoft Office Michael Schröder TU Wien Vienna, Austria michael.schroeder@tuwien.ac.at Katja Kevic Microsoft Cambridge, UK Katja.Kevic@microsoft.com Dan Gopstein Microsoft New York, USA Dan.Gopstein@microsoft.com Brendan Murphy Microsoft Cambridge, UK Brendan.Murphy@microsoft.com Jennifer Beckmann Microsoft Redmond, USA Jennifer.Beckmann@microsoft.com ABSTRACT Feature flags are a popular method to control functionality in released code. They enable rapid development and deployment, but can also quickly accumulate technical debt. Complex interactions between feature flags can go unnoticed, especially if interdependent flags are located far apart in the code, and these unknown dependencies could become a source of serious bugs. Testing all possible combinations of feature flags is infeasible in large systems like Microsoft Office, which has about 12,000 active flags. The goal of our research is to aid product teams in improving system reliability by providing an approach to automatically discover feature flag interdependencies. We use probabilistic reasoning to infer causal relationships from feature flag query logs. Our approach is language-agnostic, scales easily to large heterogeneous codebases, and is robust against noise such as code drift or imperfect log data. We evaluated our approach on real-world query logs from Microsoft Office and are able to achieve over 90% precision while recalling non-trivial indirect feature flag relationships across different source files. We also investigated re-occurring patterns of relationships and describe applications for targeted testing, determining deployment velocity, error mitigation, and diagnostics. CCS CONCEPTS • Software and its engineering → Software configuration management and version control systems; • Mathematics of computing → Probabilistic inference problems. 
KEYWORDS feature flags, log analysis, causal inference, combinatorial testing 1 INTRODUCTION Feature flags, also known as “feature toggles,” “feature switches,” “feature gates,” or “change gates,” are a design pattern to conditionally enable a code path [12]. They are a popular method within the software industry to provide the capability to control functionality in released code. Developers can wrap new code with a feature flag which can then be dynamically toggled even after the software has been deployed. The value of a feature flag is evaluated at runtime and it is either queried from a remote location or determined based on parameters in the source code. Feature flags are used to run experiments in production (e.g., for A/B testing), to roll out features in a staged manner, or for emergency bug mitigation (“e-brakes”). In the case of an e-brake, a feature flag is toggled when faulty behaviour is observed such that the bug can be mitigated rapidly without releasing a new version of the software. For an example of how feature flags are used in source code, see figure 1a. While feature flags enable rapid development and deployment of software systems, they can also accumulate technical debt. Managing many feature flags is complex and conflicts can result in unexpected and sometimes disastrous behaviour, as illustrated by the failure at Knight Capital Group [15], where reusing an old feature flag created erroneous trades in the stock market over a 45-minute period and resulted in the company going from one of the largest traders in US equities to becoming bankrupt. The management and complexity of feature flags increases when flags are interdependent (figure 1b). Interdependencies arise any time flags are nested, when the dynamic runtime value of one flag determines whether or not another flag is queried. 
In this way, code that is far downstream from the “parent” flag can be affected, and the inclusion of additional feature flags will cause yet more interdependencies. The farther apart interdependent flags are in the source code, the more indirect their relationship can be. Developers might not even be aware that some flags are interdependent, especially if the relationship extends beyond function, module, or even process boundaries. Such unknown dependencies can be (and have been) a source of serious bugs that take a significant amount of time to resolve. One way to mitigate these bugs would be to test all possible combinations of feature flags, but this quickly becomes infeasible: for the 12,000 feature flags currently active in the Microsoft Office codebase, this would amount to $2^{12{,}000} \approx 10^{3612}$ testable combinations, assuming these are all simple boolean flags—which they are not. The goal of this research is to aid product teams in improving their system’s reliability by providing a way to automatically determine feature flag interdependencies in a large software system. Knowing the relationships between feature flags that exist in a codebase provides a diversity of tangible benefits:

- We can reduce our test burden by targeting only known sets of interdependent feature flags for combinatorial testing.
- We can use the knowledge of feature flag relationships to determine the ideal deployment velocity, the speed at which changes controlled by feature flags can be rolled out.
- We can save time diagnosing failures involving feature flags by following their transitive dependencies and recognizing common interdependency patterns.
- We can prevent errors by enabling developers to check for risky dependencies before toggling a feature flag.

To this end, we developed a novel approach to analyze the feature flags that are currently active within the desktop Microsoft Office Suite.
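The guard pattern described above, in which one flag's runtime value controls whether a second flag is ever queried, can be made concrete with a small sketch. Everything here is illustrative: the `FlagStore` class and its `query` method are hypothetical stand-ins for a feature-flag SDK (not the Office one), and the flag names are borrowed from the paper's running example.

```python
class FlagStore:
    """Toy feature-flag store (hypothetical API): every query is
    logged together with the returned value, mirroring the query
    logs analyzed in this paper."""

    def __init__(self, values):
        self.values = values   # flag name -> bool
        self.query_log = []    # (flag, value) pairs in query order

    def query(self, name):
        value = self.values[name]
        self.query_log.append((name, value))
        return value


flags = FlagStore({"NEW_DESIGN": False, "DARK_MODE": True})

# Because `and` short-circuits, DARK_MODE is only queried when NEW_DESIGN
# evaluates to True -- a Control interdependency: NEW_DESIGN -> DARK_MODE.
if flags.query("NEW_DESIGN") and flags.query("DARK_MODE"):
    pass  # dark-mode code path

print(flags.query_log)  # -> [('NEW_DESIGN', False)]; DARK_MODE never appears
```

Note that the interdependency is invisible to anyone reading only the logs of this one run; it is exactly this kind of relationship that the approach below recovers by contrasting many runs.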
As stated, Microsoft Office currently contains around 12,000 feature flags with different life spans. Every day feature flags are being added and removed. A challenge in studying feature flag interdependencies in a large and mature system is that feature flags can occur in code written in many different programming languages. Furthermore, over the years, numerous APIs have been written to wrap the official feature flag SDK for additional requirements. The many different ways of defining feature flags in the source code, across many different programming languages, make it hard to use static or dynamic code analysis to determine interdependencies. The novelty of our approach is that we analyze the logs that are emitted every time a feature flag is queried in a running Microsoft Office application. Assuming feature flag queries are already being logged, the passive nature of our analysis requires no changes to the surrounding configuration infrastructure and is completely decoupled from the source code itself. We investigate the following research questions:

**RQ1** How can we infer feature flag interdependencies at scale?
**RQ2** What is the accuracy of our method in a real-world setting?
**RQ3** Do re-occurring patterns of feature flag relationships exist?

### 2 RELATED WORK

**Interdependent Feature Flags.** The problem of interdependent feature flags is one that has existed for several decades, beginning in the world of telecommunication switching [2, 9]. The modern conundrum is well described by Rahman et al. [14], “every change to trunk should be tested across all possible combinations of enabled feature toggles. This of course introduces an explosion of tests to run.” There is a common position that in practice, feature flags do not need to be exhaustively tested.
Fowler [5] recommends testing only two combinations, “all the toggles that are expected to be on in the next release” and “all toggles on,” and Neely and Stolt [13] suggest that combinatorial testing can largely be ignored if the flags are independent; these positions are often justified by the claim that “most feature flags will not interact with each other” [8]. In the case of Microsoft Office, however, the reality is quite the opposite. There are hundreds of interdependent feature flags, and in the course of our everyday jobs we have encountered many scenarios where undocumented and untested interactions between feature flags resulted in undesirable behavior. This unfortunate situation led us to try to build an understanding of which feature flags were intertwined with others. This goal is difficult though, as noted by Meinicke et al. [12], who explain “finding and understanding interactions is nontrivial, especially when features are developed separately and there are no clear specifications.” Moreover, the types of interaction among feature flags are complex as well. There are many ways for configuration data to be dependent on each other. Chen et al. [3] define a taxonomy of these dependencies including Control, Default Value, Overwrite Value, and Behavior dependencies. Our investigation focuses only on the Control dependency, where the value of one feature flag determines whether a second feature flag is or is not executed. **Mechanism of Determining Interdependency.** Before studying the properties of interdependent feature flags, we first had to identify the relationships between each of the flags in Microsoft Office. Some systems, such as the one used at Facebook [16], “expresses configuration dependency as source code dependency,” which entirely solves the problem of determining interdependency; however, it depends on a specific infrastructure that isn’t available in most systems, including ours.
For many more systems, if interdependency relationships are to be established, it must be done by inference, after the code has been written. Medeiros et al. [10] proposed a configuration-space sampling method where they used a combination of sampling algorithms to find configurations that resulted in runtime faults such as memory leaks and uninitialized variables. While they showed this technique to be valuable, it becomes either less accurate or less computationally feasible if the configuration space is very large, which is the case with Microsoft Office. A common method to analyse feature flags in the literature is to have humans validate where feature flags exist and what they’re used for. This is likely a symptom of researchers needing to operate over many disparate systems that have heterogeneous feature flagging mechanisms, as well as not having the same long-term incentives to automate the discovery process that the maintainer of an individual system might have. One example of manual flag discovery is Meinicke et al. [11], who performed an automated search through Git commit messages to find repositories which likely contained feature flags, but then used manual inspection to verify the flags existed. A system the size of Microsoft Office is too large for this approach, and instead the relationship between configuration values must be discovered as an automated process. The bulk of research on feature flag or configuration interdependency is done in a static analysis context. For example, Zhang et al. [17] use static analysis to analyze which regions of code are affected by configuration options, and from that determine which configurations depend on each other. Their goal was specifically to find “silent misconfigurations,” configuration values which have no effect on the running program, often due to interactions between configuration settings.
Static analysis has many benefits including well-defined correctness guarantees and the ability to find potential future problems before they’re executed. Conversely, it is difficult to have a static analysis system that can seamlessly process unconventional systems such as dynamically generated/loaded code, programs that use multiple languages, and even large projects in a single language that are only able to be built using complex compiler configuration that is difficult to replicate in an external system. Despite not being popular for investigating interdependency, runtime analysis has proven useful in many contexts related to independent configurations. For example, Attariyan and Flinn [1] use dynamic information flow analysis to trace data coming from configuration files to eventual errors as a tool for automated configuration debugging. Given the complexity of the Microsoft Office engineering ecosystem, we opted for the more robust option of dynamic analysis on which to base our investigation.

### 3 INFERRING RELATIONSHIPS

For any two feature flags $A$ and $B$, we want to determine whether the value of $A$ determines if $B$ is queried. In particular, we want to determine if “$A$ causes $B$,” i.e., $A \rightarrow B$, or if the value of $A$ has no effect on whether $B$ is queried, i.e., $A \not\rightarrow B$. For example, the DARK_MODE flag in figure 1 is only queried if the value of the NEW_DESIGN flag is true (assuming short-circuiting of logical operators), so $\text{NEW\_DESIGN} \rightarrow \text{DARK\_MODE}$. However, whether or not $\text{DARK\_MODE}$ is queried is independent of the value of the RIPCORD_3456 flag, so $\text{RIPCORD\_3456} \not\rightarrow \text{DARK\_MODE}$. Sometimes, feature flag relationships are easily inferable from the source code itself. In general, however, the heterogeneous nature of a large codebase makes static analysis difficult, especially for non-local relationships.
Feature flags might be spread across different compilation units or be only very indirectly related. In these cases, we have to resort to dynamic analysis of the code’s actual runtime behaviour. Fortunately, it is possible to do this in an entirely passive manner, without changes to the source code. In Microsoft Office, any time a feature flag is queried during the run of an application, the query is logged, together with the current value of the flag. Figure 2(a) presents a simplified example of such query logs. By combining the logs from multiple runs exercising different parts of an application, we can gain broad insight into global feature flag activation patterns. 3.1 Co-Occurrence Discovery If $A \rightarrow B$, then we would expect the timespan $\Delta_{AB} = t_B - t_A$ between any particular query of $A$ (at time $t_A$) and the following query of $B$ (at time $t_B$) to always be roughly the same, for all instances of $A$ and $B$ that occur in the logs. The actual value of $\Delta_{AB}$ will be different for every pair of related feature flags and could range anywhere from a few nanoseconds (e.g., for flags that occur on the same line of code) to even a few seconds (e.g., for flags that are related via some asynchronous operation, like copy-paste). We can view $\Delta_{AB}$ as a relative measure of similarity between the contexts in which flags $A$ and $B$ are evaluated. For example, two flags that are queried in a single expression on the same line of source code have very similar evaluation contexts, and thus a small $\Delta_{AB}$, as will two flags that are located in entirely different source files but connected via a function call; however, two flags that are queried at entirely different points during an application’s run will have a large $\Delta_{AB}$, regardless of whether they are spread far apart in the source code or appear within a few lines of each other. 
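The pairing of nearby queries just described can be sketched compactly. This is only an illustrative sketch under assumed inputs: the log is taken to be a list of `(timestamp_seconds, flag, value)` tuples, which is not the real Office log format, and the pairwise time test stands in for the empirically determined window size $\Delta$.

```python
from collections import Counter

def co_occurrences(log, delta):
    """Count directed co-occurrence edges q1 -> q2 for query pairs whose
    timestamps differ by at most `delta` seconds. Temporal order is kept,
    so an earlier query never becomes the target of an edge."""
    log = sorted(log)                      # order queries by timestamp
    edges = Counter()
    for i, (t1, flag1, val1) in enumerate(log):
        for t2, flag2, val2 in log[i + 1:]:
            if t2 - t1 > delta:
                break                      # all later queries are outside the window
            if flag1 != flag2:
                edges[((flag1, val1), (flag2, val2))] += 1
    return edges

log = [(0.0, "A", True), (0.2, "B", False), (5.0, "C", False)]
# With delta = 1 s, only A (True) -> B (False) co-occurs; C is 5 s away
# from everything and stays unconnected.
print(co_occurrences(log, delta=1.0))
```

Counting each edge, as done here with a `Counter`, matches the per-edge counts that the pseudocode of the co-occurrence discovery maintains.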
<table> <thead> <tr> <th>Log</th> <th>Time</th> <th>Feature</th> <th>Value</th> </tr> </thead> <tbody> <tr> <td>1</td> <td>14:18:27</td> <td>A</td> <td>False</td> </tr> <tr> <td>1</td> <td>14:18:27</td> <td>C</td> <td>False</td> </tr> <tr> <td>2</td> <td>09:10:38</td> <td>B</td> <td>False</td> </tr> <tr> <td>2</td> <td>09:10:38</td> <td>C</td> <td>False</td> </tr> <tr> <td>3</td> <td>23:53:04</td> <td>A</td> <td>True</td> </tr> <tr> <td>3</td> <td>23:53:04</td> <td>B</td> <td>False</td> </tr> <tr> <td>3</td> <td>23:53:04</td> <td>C</td> <td>False</td> </tr> </tbody> </table>

Figure 2: Using query logs (a) to discover co-occurrences (b–d) and infer causalities (e–g)

We can collect all co-occurring feature flags by dragging a sliding window of some empirically determined size $\Delta$ over the query logs, selecting all feature flag pairs with $\Delta_{AB} \leq \Delta$. Figures 2b to 2d demonstrate this process (with $\Delta = 1$ s) and show how a graph representation of the discovered co-occurrences is successively built up. In this co-occurrence graph, each vertex represents a feature flag query that returned a particular value ($AF$ meaning flag $A$ with value $False$) and each edge signifies that the two connected queries co-occurred within the same time window $\Delta$. Note that the edges are directed: we take the temporal order of queries into account to avoid adding obviously paradoxical relationships—if $A$ is queried before $B$, then $B \not\rightarrow A$. Algorithm 1 shows the co-occurrence discovery process in detail. Although the resulting co-occurrence graph already significantly reduces the state space of possible relationships (cf. section 4.1), it of course includes many co-occurrences that are merely coincidental and not actual causal relationships. To discover those, we need to employ causal reasoning.
**Algorithm 1: Co-Occurrence Discovery**
**Input:** set of feature query log files $L$; time window size $\Delta$
**Output:** co-occurrence graph $G = (V, E)$

let $G = (V, E)$ be an empty directed graph;
for each log file $l$ in $L$ do
&nbsp;&nbsp;for each sliding time window $W$ of size $\Delta$ in $l$ do
&nbsp;&nbsp;&nbsp;&nbsp;for each feature query $q$ in $W$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if $q \in V$ then increase the count of $q$ in $V$ by 1;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else add $q$ to $V$ with an initial count of 1;
&nbsp;&nbsp;&nbsp;&nbsp;for each 2-combination $(q_1, q_2)$ in $W$ do
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;if $(q_1, q_2) \in E$ then increase the count of $(q_1, q_2)$ in $E$ by 1;
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;else add $(q_1, q_2)$ to $E$ with an initial count of 1;

3.2 Naive Causal Reasoning To turn a co-occurrence graph into a causal graph, whose vertices represent single feature flags and whose directed edges indicate causal parent-child relationships, we must look at the values of prospective parent flags. The main intuition is that if $B$ is queried regardless of the value of $A$, then $A \not\rightarrow B$. To illustrate this, figures 2e to 2g proceed with the running example and successively eliminate non-causal edges from the co-occurrence graph. First, the edges $AT \rightarrow CF$ and $AF \rightarrow CF$ are removed (figure 2e), because if both $AT$ and $AF$ co-occur with $CF$, then neither can actually be a causal factor for $C$; the value of $A$ is clearly immaterial to whether or not $C$ is queried. Next, $BF \rightarrow CF$ is removed (figure 2f): even though we see the co-occurrence $BF \rightarrow CF$, we have no knowledge of $BT$, as $BT$ does not occur at all. Merely knowing of a co-occurrence ($BF \rightarrow CF$) is not enough evidence for us to assume a causal relationship ($B \rightarrow C$); we also require evidence of the absence of counter-evidence ($BT \rightarrow CF$). Put another way: in order to determine that some feature flag is the parent of another, we need to see both the cases where the flag is (or could be) the parent, and the cases where it is not.
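The two elimination rules above can be sketched as a small filter over a co-occurrence structure. The data shapes are assumptions made for illustration (boolean flags only, child queries reduced to flag names); this is not the paper's actual implementation.

```python
def causal_parents(edges, observed):
    """Naive causal reasoning over boolean flags.

    edges    -- set of ((parent_flag, parent_value), child_flag) co-occurrences
    observed -- flag name -> set of values that flag was seen with anywhere

    Returns the set of inferred causal pairs (parent_flag, child_flag).
    """
    causal = set()
    for (parent, value), child in edges:
        # Rule 1: if the child co-occurs with *both* values of the parent,
        # the parent's value is immaterial -- not a causal factor.
        if ((parent, not value), child) in edges:
            continue
        # Rule 2: the opposite parent value must occur somewhere in the
        # logs, otherwise there is no evidence of the absence of
        # counter-evidence and we cannot conclude anything.
        if (not value) not in observed.get(parent, set()):
            continue
        causal.add((parent, child))
    return causal

# Running example of figure 2: A_T and A_F both co-occur with C_F (rule 1),
# B_F co-occurs with C_F but B_T never occurs at all (rule 2), and only
# A_T -> B_F survives, giving the causal graph A -> B.
edges = {(("A", True), "C"), (("A", False), "C"),
         (("B", False), "C"), (("A", True), "B")}
observed = {"A": {True, False}, "B": {False}, "C": {False}}
print(causal_parents(edges, observed))  # -> {('A', 'B')}
```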
It is only by contrasting these two scenarios that we can gain information. Finally, only $AT \rightarrow BF$ remains (figure 2g) and thus the causal graph is simply $A \rightarrow B$. 3.3 Noise Figure 3 shows a typical instance of a real-world co-occurrence graph. Naive causal reasoning would require us to eliminate all of its edges, because they clearly contradict one another. But not all of the co-occurrences in this graph are equally valid; some of them are purely noise, which can appear for a number of reasons: **Bugs** Logging feature flag queries happens in a variety of heterogeneous environments and involves local caching, asynchronous batched network transmissions, and server-side log processing. Bugs can and do happen: queries get dropped or logged in duplicate, time-ordering gets mixed up, and so on. While we could work under the assumption that bugs are relatively rare and could be mitigated by rigorously cleaning our input data, we would much prefer to be able to draw valid conclusions from data that occasionally includes small, inexplicable amounts of noise. Such is the nature of industrial software engineering. **Crossed Signals** Our logs contain feature flag queries across a variety of apps on a variety of platforms. Some of these share the same feature flags but use them in different ways, exhibiting different interdependencies. It certainly makes sense to process some subsets of our logs separately, e.g., partitioned by platform. On the other hand, since apps do communicate with each other and there are legitimate feature relationships that cross app boundaries, we would also like to capture those. **Code Drift** As the source code changes over time, and feature flags are added and removed, the relationships between feature flags change as well. 
The query logs are like a slow-moving window sliding over the released app versions, capturing multiple versions at once and slightly lagging behind the latest changes in the source code, but steadily catching up. As most relationships between feature flags remain relatively stable, however, limiting ourselves to only logs from the very latest (released or unreleased) app versions would severely limit the amount of data available for analysis.

**Coincidences** Sometimes the data just lines up in a way that is indistinguishable from a real signal. In principle, we will never be able to entirely rule out this kind of noise. In practice, we would like our analysis method to be sensitive enough to discard many, if not most, such coincidences.

While some sources of noise can be mitigated, we would like to deal with most data as-is. How can we infer causal relationships in the presence of noise? And how can we be confident that our inferences are correct, given that one small change in signal could completely change the result?

3.4 Probabilistic Causal Reasoning

Consider first an idealized, noise-free setting. The probability that some feature flag $B$ is queried, given that $A$ was queried and returned the value $x$, is $P(B|A_x) = P(B \cap A_x) / P(A_x)$. The inverse question, "How likely is it that $A$ had the value $x$ if $B$ was queried?", is given by $P(A_x|B) = P(A_x \cap B) / P(B)$. If $A$ has $k$ possible (observed) values, then there are $2k$ such probabilities. Assume now that $A$ is a boolean flag, that the code contains the statement `if (A) {B}`, and that this is the only occurrence of both $A$ and $B$ in the source code.
Clearly, the likelihood that $B$ will be queried if $A$ is true is 100%, while the likelihood that $B$ will be queried if $A$ is false is 0%. Similarly, the likelihood that $A$ was true if $B$ was queried is 100%, and the likelihood that $A$ was false if $B$ was queried is 0%. We observe $P(B \mid A) = 1$, $P(B \mid \neg A) = 0$, $P(A \mid B) = 1$, and $P(\neg A \mid B) = 0$. Realistically, $A$ or $B$ might occur multiple times in the source code, possibly in relation with other feature flags:

```
if (A) {X}
if (A && X) {B}
if (X || A) {B}
```

The probabilities between $A$ and $B$ will then be affected by some values proportional to the number of additional children of $A$ and additional parents of $B$. In particular, we now have $P(B \mid A) = 1 - \alpha$, where $\alpha$ is some term proportional to the number of additional children of $A$, and $P(A \mid B) = 1 - \beta$, where $\beta$ is some term proportional to the number of additional parents of $B$. The table below gives the expected probabilities for the three possible scenarios: $A \Rightarrow B$, $\neg A \Rightarrow B$, and $A \not\Rightarrow B$.

<table> <thead> <tr> <th>Scenario</th> <th>$A \Rightarrow B$</th> <th>$\neg A \Rightarrow B$</th> <th>$A \not\Rightarrow B$</th> </tr> </thead> <tbody> <tr> <td>$P(B \mid A)$</td> <td>$1 - \alpha$</td> <td>$0$</td> <td>$\epsilon_1$</td> </tr> <tr> <td>$P(B \mid \neg A)$</td> <td>$0$</td> <td>$1 - \alpha$</td> <td>$\epsilon_2$</td> </tr> <tr> <td>$P(A \mid B)$</td> <td>$1 - \beta$</td> <td>$0$</td> <td>$\epsilon_3$</td> </tr> <tr> <td>$P(\neg A \mid B)$</td> <td>$0$</td> <td>$1 - \beta$</td> <td>$\epsilon_4$</td> </tr> </tbody> </table>

In the case of $A \not\Rightarrow B$, the probabilities are unknown random values $\epsilon_1$ to $\epsilon_4$, about which we know nothing, except that they are very unlikely to match the probabilities we expect in the other two cases.
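These conditional probabilities can be estimated directly from paired observations. A minimal sketch (hypothetical names and encoding, not the paper's data format; `obs` pairs each query of $A$ with whether $B$ was subsequently queried) reproduces the ideal `if (A) {B}` case:

```python
def conditionals(obs):
    """Estimate P(B|A), P(B|not A), P(A|B), P(not A|B) from observations.

    `obs` is a list of (a_value, b_queried) boolean pairs -- an
    illustrative encoding for the boolean-flag case.
    """
    n_a_true = sum(1 for a, _ in obs if a)
    n_a_false = len(obs) - n_a_true
    n_b = sum(1 for _, b in obs if b)
    n_ab = sum(1 for a, b in obs if a and b)       # A true and B queried
    n_nab = sum(1 for a, b in obs if not a and b)  # A false and B queried
    return (n_ab / n_a_true,    # P(B | A)
            n_nab / n_a_false,  # P(B | not A)
            n_ab / n_b,         # P(A | B)
            n_nab / n_b)        # P(not A | B)

# Ideal `if (A) {B}`: B is queried exactly when A is true.
ideal = [(True, True)] * 5 + [(False, False)] * 5
# conditionals(ideal) -> (1.0, 0.0, 1.0, 0.0)
```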
The exact values of $\alpha$ and $\beta$ are also unknown, and they are different for each particular combination of feature flags $A$, $B$, and $X$, but it is reasonable to assume that for most feature flags the number of parents and children will be much closer to one than, for example, ten. Both $\alpha$ and $\beta$ are thus expected to be significantly smaller than one on average. Knowing which probabilities to expect for $A \Rightarrow B$ and $\neg A \Rightarrow B$, we can calculate two error values $E_T$ and $E_F$, indicating how much reality deviates from the expectations for each scenario. The smaller the error, the more likely the scenario; if both errors are too large, then $A \not\Rightarrow B$. Figure 3 demonstrates these calculations on a noisy graph based on real data. In the remainder of this section, we formalize this idea and generalize it to non-boolean flags.

**Probabilistic Causal Inference.** Assume that $A$ and $B$ are feature flags, with $A$ having $k$ observed values, and that $A$ occurs before $B$. As a shorthand, we will write $A_i$ for the total number of occurrences of $A$ that return value $i$, $B$ for the total number of occurrences of $B$ (returning any value), and $A_iB$ for the number of co-occurrences of $A_i$ and $B$. For each of the $k$ possible scenarios $A_i \Rightarrow B$, we can compute an error value

$$E_i = \frac{1}{k + 2} \left( \left(1 - \frac{A_iB}{A_i}\right) + \sum_{j \neq i} \frac{A_jB}{A_j} + \left(1 - \frac{A_iB}{B}\right) \right).$$

The overall error for the possibility $A \Rightarrow B$ is then given by

$$E = \min_i E_i.$$

Because \( E \) only captures the relative proportions between \( A \) and \( B \), we assess our confidence in \( E \) by computing the least absolute number of contributing observations \[ N = \min(A_1, \ldots, A_k, B).
\] Then, for empirically determined thresholds \( \hat{E} \) and \( \hat{N} \),

\[ A \rightarrow B \quad \text{if} \quad k \geq 2 \quad \text{and} \quad E \leq \hat{E} \quad \text{and} \quad N \geq \hat{N}, \]

\[ A \nrightarrow B \quad \text{otherwise}. \]

We are thus able to infer interdependence between feature flags based on observed (co-)occurrences.

**RQ1. How can we infer feature flag interdependencies at scale?** Looking solely at query logs, we are able to discover feature flags that repeatedly co-occur within certain time windows. Based on intuitions about code structure and employing notions from probability theory, we developed a method of probabilistic causal reasoning that is robust to noise by calculating how closely a pair of co-occurring feature flags matches an ideal causal relationship.

### 4 EVALUATION

We implemented our inference mechanism in Python and applied it to real-world feature flag query logs from Microsoft Office. We chose a sub-sample of query logs restricted to a single release platform and code fork, which made it easier to cross-reference potential findings with the codebase. For a period of one week, we collected about 2.5 million feature queries per day, from about 80,000 daily app sessions. We performed co-occurrence discovery every day, with a time window size \( \Delta = 1 \mu s \), incrementally updating our database of co-occurrences and re-calculating all causal probabilities afterwards. At the end of the collection period, we had discovered 5,946,317 pairs of 12,791 co-occurring feature flags. Of these, 326,418 pairs of 3724 feature flags were potentially causally related \((E \leq 0.50)\) and 593 pairs of 612 feature flags were considered to be likely causally related \((E \leq 0.25)\). Figure 5 presents some concrete examples of found relationships.
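Using the shorthand of section 3, the error metric and decision rule can be sketched as follows. This is a hypothetical Python sketch with illustrative names, not the production implementation mentioned above:

```python
def error(a, b_total, ab):
    """E = min over i of E_i for the possibility 'A causes B'.

    a:       dict mapping each observed value i of A to its count A_i
    b_total: total occurrences of B (any returned value)
    ab:      dict mapping value i to the co-occurrence count of A_i and B
    """
    k = len(a)
    errors = []
    for i in a:
        e = 1 - ab.get(i, 0) / a[i]                         # expect P(B|A_i) near 1
        e += sum(ab.get(j, 0) / a[j] for j in a if j != i)  # expect P(B|A_j) near 0
        e += 1 - ab.get(i, 0) / b_total                     # expect P(A_i|B) near 1
        errors.append(e / (k + 2))
    return min(errors)

def causally_related(a, b_total, ab, e_max=0.25, n_min=100):
    """Decision rule: A -> B iff k >= 2, E <= E-hat, and N >= N-hat."""
    n = min(min(a.values()), b_total)  # least absolute number of observations
    return len(a) >= 2 and n >= n_min and error(a, b_total, ab) <= e_max
```

A perfect `if (A) {B}` signal, e.g. `a = {True: 150, False: 100}`, `b_total = 150`, `ab = {True: 150}`, yields an error of zero for the scenario $A_T \Rightarrow B$ and is accepted; a flag pair whose co-occurrences are spread evenly across both values of $A$ is rejected.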
#### 4.1 Precision

To evaluate the precision of our approach—how many of the relationships we uncover are actually true causal relationships?—we cross-checked the results of our inference algorithm with the Microsoft Office source code. We selected 200 pairs of 327 feature flags in a purposive sample covering the range of \( E \) and \( N \) values returned by our algorithm. The sample is balanced, with 107 of the sample pairs exhibiting a real causal relationship in the codebase, and 93 with no discernible causality. We manually inspected the source code locations of each selected feature flag pair to determine causality. This was a time-consuming process, as it is often not immediately apparent whether a causal relationship exists, especially for rather indirect relationships. We erred on the side of caution, and only reported true positives when the causal relationship was clear beyond doubt; if the examiner was not able to establish a causal relationship after some time (typically about 15 minutes), the feature flag pair in question was marked as a false positive—thus it is possible that the number of true positives is actually higher than what we report. Figure 4 shows the precision (true positives divided by sample) plotted against \( \hat{E} \), for different choices of \( \hat{N} \). We are able to achieve 100% precision with \( \hat{E} = 0.05 \) (regardless of \( \hat{N} \)), and 90% precision with \( \hat{E} = 0.25 \) and \( \hat{N} = 100 \). The exact numbers of manually verified (true positive) and falsified (false positive) pairs are given in Table 1, which also shows how many pairs of feature flags we are able to discover at different levels of \( \hat{E} \). Choosing \( \hat{E} = 0.50 \) and \( \hat{N} = 100 \), i.e., classifying rather unlikely pairs as related, we still achieve a precision of 66%—significantly better than chance.
This makes sense, because the co-occurrence discovery step already reduces the set of possible relationships in a major way, filtering out those pairs of feature flags which are definitely not related. Adding probabilistic causal reasoning on top, i.e., only counting pairs with an \( E \leq 0.50 \), naturally increases precision further.

![Figure 4: Precision for different values of \( \hat{E} \) and \( \hat{N} \). As our willingness to accept unlikely candidates increases, so do the rates of false positive parent-child relationships.](image)

<table> <thead> <tr> <th>$\hat{E}$</th> <th>Discovered Pairs</th> <th>Discovered Flags</th> <th>Verified Pairs</th> </tr> </thead> <tbody> <tr> <td>0.01</td> <td>16</td> <td>31</td> <td>7</td> </tr> <tr> <td>0.05</td> <td>98</td> <td>167</td> <td>28</td> </tr> <tr> <td>0.10</td> <td>149</td> <td>231</td> <td>41</td> </tr> <tr> <td>0.15</td> <td>214</td> <td>296</td> <td>50</td> </tr> <tr> <td>0.20</td> <td>305</td> <td>372</td> <td>53</td> </tr> <tr> <td>0.25</td> <td>593</td> <td>612</td> <td>56</td> </tr> <tr> <td>0.30</td> <td>941</td> <td>901</td> <td>56</td> </tr> <tr> <td>0.35</td> <td>2358</td> <td>1791</td> <td>57</td> </tr> <tr> <td>0.40</td> <td>7130</td> <td>3012</td> <td>58</td> </tr> <tr> <td>0.45</td> <td>10,247</td> <td>3430</td> <td>58</td> </tr> <tr> <td>0.50</td> <td>326,418</td> <td>5724</td> <td>59</td> </tr> </tbody> </table>

AugLoopRuntime.cpp

```cpp
bool FSimilarityEnabled()
{
    static const FeatureFlag EduEnabled {...};
    static const FeatureFlag ConEnabled {...};
    static const FeatureFlag EntEnabled {...};
    return (EduEnabled || ConEnabled || EntEnabled);
}
```

(a) Triangular relationship

EntityManager.cpp

```cpp
void EntityManager::Init()
{
    if (FeatureFlags::Instance(m_pWorkbook).AutoRefresh())
    {
        RefreshManager::CreateSharedInstance(m_pWorkbook);
    }
}
```

RefreshManagerImpl.cpp

```cpp
void RefreshManagerImpl::CreateSharedInstance(Workbook* pWorkbook)
{
    try
    {
        refreshManager = GetApi<RefreshManager>(NEWSHAREDObj(RefreshManagerImpl, pWorkbook));
    }
    CATCH_HANDLER
}
```

(b) Indirect relationship across multiple files

Word.xml

```xml
<FSDropGallery Id="flyoutInsertPics" FeatureFlag="PictureRibbon">
  <Commands>
    <FSMenuCategory Class="StandardItems">
      <Items>
        <FSExecuteAction Id="insertPicFromFile" />
        <FSExecuteAction Id="insertOnlinePic" FeatureFlag="OnlinePics" />
        <FSExecuteAction Id="clipArtDialog" />
      </Items>
    </FSMenuCategory>
  </Commands>
</FSDropGallery>
```

(c) Relationship in resource file

Figure 5: Real causal relationships between feature flags found in the Microsoft Office codebase. The source code has been simplified for presentational purposes.

4.2 Recall

Since our goal is to find relationships between feature flags that are as of yet unknown, we do not have a priori ground truth. This makes it difficult to establish recall—how many of the relationships between feature flags that are hidden in the codebase can our method uncover? We are unable to answer this question directly. However, we can make inferences based on the quality of our results; in particular, the types of relationships we are seeing. Figure 5a is an example of an “obvious” relationship: three feature flags are queried together as part of a boolean predicate, giving rise to a triangular interdependency; only if the EduEnabled flag is false will the ConEnabled flag be queried, and only if both EduEnabled and ConEnabled are false will EntEnabled be queried. This relationship is manifested entirely in a single line of source code, producing a strong signal in the query logs that our system can easily detect. Figure 5b shows a much more indirect relationship, spanning multiple source files. Here, the parent (AutoRefresh) and child (ShowRefreshBar) flags are queried in different program modules and are separated in the control flow by a number of function calls involving macro expansions, class constructors, and C++ templates.
A purely static approach might have some difficulties with this, but our log-based analysis naturally captures the dynamic control flow; the surrounding syntactic complexity is entirely irrelevant. Figure 5c demonstrates that our approach is also completely language-agnostic. In addition to flag usage in C++, C#, and other programming languages, we are able to find dependencies between feature flags used solely in non-code resource files, as in the present case of the PictureRibbon → OnlinePics pair found in an XML configuration file used to construct an application UI. Given the diversity of relationship types we are able to find (see also section 5), including very indirect relationships, we believe that our results are indicative of non-trivial recall. RQ2. What is the accuracy of our method in a real-world setting? To determine the precision of our approach, we manually evaluated a subset of discovered relationships in a large-scale real-world codebase and found that we are able to achieve 90% precision for likely pairs (E ≤ 0.25), with an absolute minimum precision of 66%. While we are unable to precisely quantify recall due to a lack of ground truth, we see evidence of non-trivial recall in the indirect nature of some of the discovered relationships, which can span multiple files and programming languages. 5 INTERDEPENDENCY PATTERNS So far, we have discussed feature flag relationships mostly as pairwise parent-child relationships between two flags. As figure 5a demonstrates, more complex patterns can emerge once transitive dependencies are taken into account. Each of the two flags in a parent-child relationship can themselves be in further parent-child relationships with other flags (which is reflected in the values of α and β in section 3.4). To investigate the extent of such transitive interdependencies and whether or not they give rise to re-occurring patterns, we can study the global causal graph of feature flags, as seen in figure 6. 
Here, we plotted the 612 feature flags from our evaluation (section 4) that were considered to be likely causally related (E ≤ 0.25). Nodes correspond to feature flags and the directed edges represent parent-child relationships. The weakly connected components of this graph are feature clusters, i.e., subsets of feature flags that are only (indirectly) connected to each other but not to flags from any other subset. The layout was achieved using the Fruchterman-Reingold algorithm [6], which naturally brings out independent clusters. The location and distance of nodes hold no further meaning. Based on visual inspection of this graph, we identified five basic patterns of feature flag interdependencies. The identified patterns, the rules used to determine if a feature flag cluster belongs to a specific pattern, as well as examples of code structures that could give rise to each pattern, are given in table 2. The most common pattern is the simple pair of parent-child flags, occurring 79 times in our sample and involving 158 flags (25.8% of all flags in the sample). The second most common is the outward star pattern, involving 122 flags (19.9%), where one parent flag is at the center of numerous parent-child relationships, but the children are themselves not interconnected. This situation arises when a single flag guards a large section of code containing many independent flags, or when a (often non-boolean) feature flag acts as a configuration parameter that is repeatedly used in scenarios involving other flags. Less common, involving only 15 flags (2.5%), is the inward star, where a child flag has multiple parent flags, which can occur when the child flag is reused in different code contexts. 
Table 2: Identified patterns of feature flag interdependencies

<table> <thead> <tr> <th>Pattern</th> <th>Description</th> <th>Code Example</th> <th>Occurrence</th> <th>Involved Flags</th> </tr> </thead> <tbody> <tr> <td>Chain</td> <td>At least three nodes that are in consecutive parent-child relationships.</td> <td><code>if (A) {B} … if (B) {C}</code></td> <td>4 (2.7%)</td> <td>12 (2%)</td> </tr> <tr> <td>Triangle</td> <td>At least three nodes in a chain, with the first node also being the parent of the last node.</td> <td><code>(A &amp;&amp; B &amp;&amp; C)</code></td> <td>4 (2.7%)</td> <td>12 (2%)</td> </tr> <tr> <td>Inward Star</td> <td>One node is the child of at least two parents, which are not themselves connected.</td> <td><code>if (A) {C} … if (B) {C}</code></td> <td>5 (3.4%)</td> <td>15 (2.5%)</td> </tr> <tr> <td>Outward Star</td> <td>One node is the parent of at least two children, which are not themselves connected.</td> <td><code>f(A,B) … g(A,C)</code></td> <td>35 (24%)</td> <td>122 (19.9%)</td> </tr> <tr> <td>Simple Pair</td> <td>Two nodes that are in a parent-child relationship.</td> <td><code>if (A) {B}</code></td> <td>79 (54.1%)</td> <td>158 (25.8%)</td> </tr> <tr> <td>Other</td> <td>Unclassifiable; often basic patterns with slight deviations, or superclusters of multiple patterns.</td> <td></td> <td>19 (13%)</td> <td>293 (47.9%)</td> </tr> </tbody> </table>

Triangle and chain patterns each only occur 4 times in our sample and are closely related: triangle formations are usually due to short-circuiting boolean predicates or closely nested if statements, while chains arise either when consecutive parent-child relationships are not nested but purely sequential, or when the relationships are very indirect, with enough distance between parent and grandchild to not be recognized as a triangle. In addition to these basic patterns, a number of clusters remained unclassifiable (19 out of 146, involving 293 flags in total).
Of these, many are essentially one of the basic patterns with slight deviations preventing easy classification. For example, one large cluster involving 102 flags (the “starburst” in the lower center of figure 6) is almost a pure outward star pattern, save for a few interconnected children. Other unclassifiable patterns arise when two or more basic patterns are connected by a bridge node, forming a singular supercluster. Bridge nodes could indicate two otherwise unrelated application components that are linked by a common feature flag, increasing software coupling and perhaps introducing a hidden interdependency. Inability to assign one of the basic classifications may well be an indicator of unusual complexity and therefore risk.

**RQ3. Do re-occurring patterns of feature flag relationships exist?** We found five re-occurring patterns of feature flag interdependency relationships: simple pairs, outward stars, inward stars, triangles, and chains. Other types of feature flag clusters are often deviations from these basic patterns. We can use interdependency patterns to identify unusual or risky code structures.

**6 THREATS TO VALIDITY**

While our work is based on real-world data of a large-scale and mature software system, there are threats to the generalizability of our approach.

**Idealized Assumptions.** If the relationships between feature flags are actually significantly different from the platonic ideal `if (A) {B}`, or the average number of children and parents per feature flag (reflected in the values of $\alpha$ and $\beta$) is much higher than we assume, then our probabilistic method might have a hard time inferring relationships. However, we based our assumptions on our direct experience with actual code containing feature flags, and empirical evaluation confirms the effectiveness of our approach.

**Lack of Ground Truth.** We have mentioned the difficulty of establishing recall, as we lack ground truth.
It is possible that our approach, while able to find some relationships, is still missing a significant number. But based on our findings, which do include non-trivial indirect relationships, we are confident of achieving reasonable recall. The parameters $(\Delta, \hat{E}, \hat{N})$, which influence recall, need to be chosen empirically, and we believe we made reasonable choices for the purposes of this paper; we have limited evidence that by increasing $\Delta$ we can further improve recall (see section 7).

**Cold Start Problem.** Our approach is fundamentally data-driven: in order to make inferences about possible relationships between feature flags, the data needs to contain evidence of these relationships, in the form of sequential feature queries. To generate these feature queries, the applications need to run with certain combinations of feature flags enabled; without knowing the relationships between feature flags beforehand, we would need to test all possible combinations of flags, with all possible values, in order to generate the data necessary to make complete inferences, which is computationally infeasible. In reality, for our applications, we do not actually need to have perfect recall. Being able to infer a significant amount of interesting relationships is enough to make the system useful. Furthermore, preliminary inference results can be used to selectively generate missing data, enabling more inferences and improving recall (see section 7).
**Codebase Bias.** If the inference mechanism is too closely tailored to the particularities of a single codebase (i.e., that of Microsoft Office) and the uses of feature flags therein, then it might not be transferable to other applications. However, we believe that the foundations of our approach are entirely application-agnostic and that it is sufficiently general to be applicable to other codebases. Moreover, Microsoft Office itself consists of a heterogeneous set of applications, with massive differences between their individual core components. **7 FUTURE WORK** In the future, we aim to improve both precision and recall by completing our dataset and investigating larger time windows; and we want to further explore patterns of interdependencies among feature flag clusters. **Completing the Dataset.** The probabilistic causal discovery approach works best with complete data, i.e., a dataset in which both boolean feature flag values are present. As the dataset in practice is oftentimes incomplete, i.e., only one feature flag value is present as opposed to both, we plan to systematically run an automated test suite [7] on Microsoft Office applications with different sets of feature flag values. The output that is logged by the simulator in our test suite is exactly the same as when real users would use a Microsoft Office application. **Investigating Larger Time Windows.** We plan to evaluate our approach using larger co-occurrence time windows (Δ), which could allow us to capture feature flag pairs that are being queried further apart. We hypothesize that more nested feature flags might be discovered in features that take longer to fully execute due to user interactions, e.g., copying and pasting. **Exploring More Interdependency Patterns.** Feature flag pattern recognition could be improved by tolerating slight deviations from existing patterns and by recognizing more complex combinations, identifying bridge nodes and superclusters. 
We also want to better understand what code structures give rise to which interdependency patterns, and how such patterns are linked to faults.

**8 APPLICATIONS**

We performed this research in response to several practical problems we regularly face in our organization. One of the most valuable outcomes of this work is the diversity of tangible benefits we can receive by applying our findings. These issues span the entire lifecycle of our product, from automated testing to client-side error mitigation. Further, the challenges we hope to address have impacts that range from increased organizational efficiency to simplified development practices.

**Targeted Testing.** Testing all possible combinations of feature flag values becomes substantially harder as more feature flags are used. The combinatorial explosion that occurs when using many feature flags makes it impossible to test all combinations. Fowler [5] recommends testing the feature flags that are known to be enabled in the next release. However, large projects can contain thousands of feature flags where every flag can be toggled. Therefore, it is important to enable tooling that helps to systematically test only the relevant combinations. Our research on feature flag co-occurrences can be applied to substantially decrease the number of feature flag value combinations to test, as only the co-occurring feature flags’ combinations need to be targeted for combinatorial testing. Flags that are not co-occurring can be tested independently of each other. Conversely, flags which are discovered to be involved in complex relationships can be highlighted for additional scrutiny.

**Deployment Velocity.** We plan to use the knowledge of feature flag dependencies to determine the velocity with which a flag can be rolled out. Feature flags, by their design, indicate the usage of unique modules of code. Interdependent features then indicate interdependent modules, which is the main factor in coupled code.
It is well studied that software coupling is correlated with negative quality indicators, such as vulnerabilities [4]. Consequently, we extrapolate that interdependent flags are more at risk of admitting vulnerabilities. We can use this information to roll out changes slower to ensure that they’re thoroughly understood and tested before being fully deployed. **Diagnostics.** Failures rooted in feature flags can be tedious and time-consuming to diagnose. Troubleshooting failures when multiple feature flags are involved can incur substantial costs [1]. Showing explicitly which feature flags are interdependent has the potential to decrease the time to mitigate the problem, and it might uncover previously unknown relationships as the cause of failure. **Error Mitigation.** Many features are developed behind feature flags, such that the flag can be toggled in case of a failure [14]. The typical response to discovering an error behind a feature flag is to mitigate the error by immediately disabling the flag. In the case of interdependent flags, however, this can have unintended side effects. It could disable more features than intended, or leave the system configuration in an unexpected and untested state. Our research can enable developers to check if there are any dependencies before toggling a feature flag, which can help to prevent a further regression. **9 CONCLUSION** In this paper, we described an approach for automatically discovering interdependencies between feature flags in order to aid product teams in improving their system’s reliability. Unknown dependencies between feature flags can be a source of serious bugs but testing all possible flag combinations is infeasible for large projects. Our approach is based solely on analyzing feature flag query logs and is especially suited for large heterogeneous codebases. We developed a method of probabilistic causal reasoning that is language-agnostic and robust against noise. 
We applied our approach on the Microsoft Office codebase and achieved high precision and non-trivial recall. In analysing the results, we found patterns of feature flag relationships that can be indicators for the amount of risk associated with certain flags. Our work can be applied in reducing the test burden for combinatorial testing, in determining deployment velocity for safe rollouts, in diagnostics of faults involving feature flags, and in error mitigation by preventing regressions. In the future, we will use automated testing to increase and improve the data available for analysis and we plan to experiment with different time windows to discover a wider range of possible relationships. REFERENCES
Ascend by Evolv: AI-Based Massively Multivariate Conversion Rate Optimization

Risto Miikkulainen\textsuperscript{2,3}, Myles Brundage\textsuperscript{1}, Jonathan Epstein\textsuperscript{1}, Tyler Foster\textsuperscript{1}, Babak Hodjat\textsuperscript{2}, Neil Iscoe, Jingbo Jiang, Diego Legrand, Sam Nazari\textsuperscript{1}, Xin Qiu\textsuperscript{2}, Michael Scharff\textsuperscript{3}, Cory Schoolland\textsuperscript{1}, Robert Severn\textsuperscript{1}, Aaron Shagrin\textsuperscript{1}

risto.hodjat,xin.qiu@cognizant.com; myles.brundage,jonathan.epstein,tyler.foster,sam.nazari,michael.scharff, cory.schoolland,robert.severn,aaron.shagrin@evolv.ai; niscoe,yamnoviola,legrand.diego@gmail.com

\textsuperscript{1}Evolv Technologies, \textsuperscript{2}Cognizant Technology Solutions, \textsuperscript{3}The University of Texas at Austin

**Abstract**

Conversion rate optimization (CRO) means designing an e-commerce web interface so that as many users as possible take a desired action such as registering for an account, requesting a contact, or making a purchase. Such design is usually done by hand, evaluating one change at a time through A/B testing, evaluating all combinations of two or three variables through multivariate testing, or evaluating multiple variables independently. Traditional CRO is thus limited to a small fraction of the design space, and often misses important interactions between the design variables. This paper describes Ascend by Evolv, an automatic CRO system that uses evolutionary search to discover effective web interfaces given a human-designed search space. Design candidates are evaluated in parallel online with real users, making it possible to discover and utilize interactions between the design elements that are difficult to identify otherwise. A commercial product since September 2016, Ascend has been applied to numerous web interfaces across industries and search space sizes, with up to four-fold improvements over human design.
Ascend can therefore be seen as massively multivariate CRO made possible by AI.

**Introduction**

In e-commerce, designing web interfaces (i.e. web pages and interactions) that convert as many users as possible from casual browsers to paying customers is an important goal (Ash et al. 2012; Saleh and Shukairy 2011). While there are some well-known design principles, including simplicity and consistency, there are often also unexpected interactions between elements of the page that determine how well it converts. The same element, such as a headline, image, or testimonial, may work well in one context but not in others; it is often hard to predict the result, and even harder to decide how to improve a given page. An entire subfield of information technology has emerged in this area, called conversion rate optimization, or conversion science. The standard method is A/B testing, i.e. designing two different versions of the same page, showing them to different users, and collecting statistics on how well they each convert (Kohavi and Longbotham 2016). This process allows incorporating human knowledge about the domain and conversion optimization into the design, and then testing its effect. After observing the results, new designs can be compared and gradually improved. The A/B testing process is difficult and time-consuming: only a very small fraction of page designs can be tested in this way, and subtle interactions in the design are likely to go unnoticed and unutilized. An alternative to A/B testing is multivariate testing, where all value combinations of a few elements are tested at once. While this process captures interactions between these elements, only a very small number of elements is usually included (e.g. 2-3); the rest of the design space remains unexplored. The Taguchi method (Kohavi and Thomke 2017; Rao et al. 2008) is a practical implementation of multivariate testing.
It avoids the computational complexity of full multivariate testing by evaluating only orthogonal combinations of element values. Taguchi is the current state of the art in this area, included in commercial applications such as Adobe Target (Adobe 2018). However, it assumes that the effect of each element is independent of the others, which is unlikely to be true in web interface design. It may therefore miss interactions that have a significant effect on conversion rate. This paper describes an AI-assisted technology for conversion optimization based on evolutionary computation. This technology is implemented in Ascend, a conversion optimization product by Evolv Technologies (and formerly by Sentient Technologies), deployed in numerous e-commerce websites of paying customers since September 2016 (Sentient Technologies 2017). Ascend uses a customer-designed search space as a starting point. It consists of a list of elements on the web page that can be changed, and their possible alternative values, such as a header text, font, and color, background image, testimonial text, and content order. Ascend then automatically generates web-page candidates to be tested, and improves those candidates through evolutionary optimization. Because e-commerce sites often have a high volume of traffic, fitness evaluations can be done live with a large number of real users in parallel. The evolutionary process in Ascend can thus be seen as a massively parallel version of interactive evolution, making it possible to optimize web designs in a few weeks. Intelligent traffic allocation through multi-armed bandit methods can be used to identify the best candidates reliably, and also to optimize overall performance over limited-duration campaigns. From the application point of view, Ascend is a novel method for massively multivariate optimization of web-page designs.
Depending on the application, improvements of 20-200% over human design are routine using this approach (Sentient Technologies 2017). These results are reliable across industries and search-space sizes. This paper describes the technology underlying Ascend, presents an example use case, an empirical comparison to the Taguchi method, and an extension to improved traffic allocation using multi-armed bandit methods, summarizes the product status, and outlines future opportunities for evolutionary computation in optimizing e-commerce.

**Background**

With the explosive growth of e-commerce in recent years, entirely new areas of study have emerged. One of the main ones is conversion rate optimization, i.e. the study of how web interfaces should be designed so that they are as effective as possible in converting users from casual browsers to actual customers. Conversion means taking a desired action on the web interface such as making a purchase, registering for a marketing list, or clicking on other desired links in an email, website, or desktop, mobile, or social media application (Ash et al. 2012; Saleh and Shukairy 2011). Conversions are usually measured in number of clicks, but also in metrics such as resulting revenue or time spent on the site and rate of return to the site. Conversions are currently optimized in a labor-intensive manual process that requires significant expertise. The web design expert or marketer first creates designs that s/he believes to be effective. These designs are then tested in an A/B testing process, by directing user traffic to them, and measuring how well they convert. If the conversion rates are statistically significantly different, the better design is adopted. This design can then be improved further, using domain expertise to change it, in another few rounds of creation and testing. Conversion optimization is a fast-emerging component of e-commerce.
In 2016, companies spent over $72 billion to drive customers to their websites (eMarketer 2016). Much of that investment does not result in sales: conversion rates are typically 2-4% (i.e. 2-4% of the users that come to the site convert within 30 days). In 2014, only 18% of the top 10,000 e-commerce sites did any conversion optimization; in January 2017, 30% of them did so (Builtwith 2017). The growth is largely due to available conversion optimization tools, such as Optimizely, Visual Website Optimizer, Mixpanel, and Adobe Target (Builtwith 2017). These tools make it possible to configure the designs easily, allocate users to them, record the results, and measure significance. This process has several limitations. First, while the tools make the task of designing effective web interfaces easier, the design is still done by human experts. The tools thus provide support for confirming the experts' ideas, not helping them explore and discover novel designs. Second, since each step in the process requires statistical significance, only a few designs can be tested. Third, each improvement step amounts to one step in hill-climbing; such a process can get stuck in local maxima. Fourth, the process is aimed at reducing false positives and therefore increases false negatives, i.e. designs with good ideas may be overlooked. Fifth, while the tools provide support for multivariate testing, in practice only a few combinations can be tested (e.g. five possible values for two elements, or three possible values for three elements); alternatively, when using the Taguchi method, the variables are assumed to have independent effects. As a result, it is difficult to discover and utilize interactions between design elements. Evolutionary optimization is well suited to address these limitations.
Evolution is an efficient method for exploration; only weak statistical evidence is needed for progress; its stochastic nature avoids getting stuck in local maxima; good ideas will gradually become more prevalent. Most importantly, evolution searches for effective interactions. For instance, Ascend may find that the button needs to be green, but only when it is transparent, and the header is in small font, and the header text is aligned. Such interactions are very difficult to find using A/B testing, requiring human insight into the results. Evolution makes this discovery process automatic. With Ascend, it is thus possible to optimize conversions better and at a larger scale than before. Technically, Ascend is related to approaches to interactive evolution (Secretan et al. 2011; Takagi 2001) and crowdsourcing (Brabham 2013; Lehman and Miikkulainen 2013a) in that evaluations of candidates are done online by human users. The usual interactive evolution paradigm, however, employs a relatively small number of human evaluators, and their task is to select good candidates or evaluate the fitness of a pool of candidates explicitly. In contrast, in Ascend, a massive number of human users are interacting with the candidates, and fitness is derived from their actions (i.e. convert or not) implicitly.

**The Ascend Method**

Ascend consists of defining the space of possible web interfaces, initializing the population with a good coverage of that space, estimating the performance of the candidates reliably, allocating traffic to candidates intelligently so that bad designs can be eliminated early, and testing candidates online in parallel. Each of these steps is described in more detail in this section.

**Defining the Search Space**

The starting point for Ascend is a search space defined by the web designer.
Ascend can be configured to optimize a design of a single web-page, or a funnel consisting of multiple pages such as the landing page, selections, and a shopping cart. For each such space, the designer specifies the elements on that page and the values that they can take. For instance in the landing page example of Figures 1 and 2, logo size, header image, button color, and content order are such elements, and they can each take on 2-4 values. Ascend searches for good designs in the space of possible combinations of these values. This space is combinatorial, and can be very large, e.g. 1.1M in this example. Interestingly, it is exactly this combinatorial nature that makes web-page optimization a good application for evolution: even though human designers have insight into what values to use, their combinations are difficult to predict, and need to be discovered by a search process such as evolution.

**Initializing Evolution**

A typical setup is that there is already a current design for the web interface, and the goal for Ascend is to improve over its performance. That is, the current design of the web interface is designated as the Control, and improvement is measured compared to that particular design. Because fitness is evaluated with real users, exploration incurs a real cost to the customer. It is therefore important that the candidates perform reasonably well throughout evolution, and especially in the beginning. If the initial population were generated randomly, many web interfaces would perform poorly. Instead, the initial population is created using the Control as a starting point: the candidates are created by changing the value of one element systematically. In a small search space, the initial population thus consists of all candidates with one difference from the control; in a large search space, the population is a sample of the set of such candidates. With such an initialization, most of the candidates perform similarly to the control.
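In code, this control-based initialization might look as follows. The element names, values, and dict-based genome encoding here are invented for illustration; they are not Ascend's actual representation.

```python
import random

# Hypothetical search space: each element on the page and its possible values.
SEARCH_SPACE = {
    "logo_size": ["small", "medium", "large"],
    "button_color": ["blue", "green", "orange", "white"],
    "header_font": ["serif", "sans"],
}

# The Control is the current hand-made design.
CONTROL = {"logo_size": "medium", "button_color": "blue", "header_font": "serif"}

def initial_population(control, space, max_size=None):
    """Create candidates that differ from the control in exactly one element."""
    population = []
    for element, values in space.items():
        for value in values:
            if value == control[element]:
                continue  # skip the control's own value
            candidate = dict(control)
            candidate[element] = value
            population.append(candidate)
    if max_size is not None and len(population) > max_size:
        population = random.sample(population, max_size)  # large spaces: sample
    return population

pop = initial_population(CONTROL, SEARCH_SPACE)
# For this toy space: 2 + 3 + 1 = 6 one-change candidates.
```

Every candidate in the result differs from the Control in exactly one element, so the population starts near the Control's performance while still covering each search dimension.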
The candidates also cover the search dimensions well, thus forming a good starting point for evolution.

**Estimating Performance**

Ultimately, the fitness of a candidate is its conversion rate, that is, the ratio of people that convert to the total number of visitors of the web page. Because there is only a limited amount of traffic available to test each candidate, this rate is always a noisy estimate. However, it can be made more reliable in two ways: (1) by taking a Bayesian prior into account: the conversion rate is unlikely to be arbitrary, but instead is likely to be similar to those of other candidates; and (2) by estimating how likely the candidate's conversion rate is to be better than that of the control. A prior estimate of the conversion rate can be obtained as the average of all candidates tested so far. A probability distribution of the conversion rate is then built for the control and the candidate, as demonstrated in Figure 3. The proportion of the area under the candidate's conversion-rate distribution where it beats that of the control is computed as the probability to beat control. This probability is then used as the fitness for the candidate. While probability to beat control is a common technique in CRO (Google 2019; SplitMetrics 2019; VWO 2019), the evolutionary optimization context in Ascend makes it possible to improve it further. Instead of computing the prior based on all candidates, it can be computed based on the candidate's evolutionary parents. They are most similar to the candidate, resulting in a more accurate prior, and therefore more reliable estimates.

**Evolutionary Process**

Each page is represented as a genome, as shown for two example pages in Figure 2 (left side). The usual genetic operations of crossover (recombination of the elements in the two genomes; middle) and mutation (randomly changing one element in the offspring; right side) are then performed to create new candidates.
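These genetic operations can be sketched as follows. The element spaces are invented, and uniform crossover with single-element mutation is a simplification of the operators illustrated in Figure 2.

```python
import random

rng = random.Random(7)

# Hypothetical element spaces for the genome (cf. Figure 2):
SPACE = {"header_text": ["A", "B", "C"],
         "button_color": ["blue", "green", "white"],
         "image": ["img1", "img2"]}

def crossover(parent1, parent2):
    """Recombine the elements of two genomes: each element value is
    inherited from one of the two parents."""
    return {e: rng.choice((parent1[e], parent2[e])) for e in SPACE}

def mutate(genome):
    """Randomly change one element of the offspring to a value from its space."""
    child = dict(genome)
    element = rng.choice(list(SPACE))
    child[element] = rng.choice(SPACE[element])
    return child

p1 = {"header_text": "A", "button_color": "blue", "image": "img1"}
p2 = {"header_text": "C", "button_color": "green", "image": "img2"}
child = mutate(crossover(p1, p2))
```

Before mutation, every element of the offspring comes from one of its two parents; mutation then reintroduces values from the full search space, which keeps exploration alive.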
In the current implementation, fitness-proportionate selection is used to generate offspring candidates from the current population. From the current population of $n$ candidates, another $n$ new candidates are generated in this way. Because evaluations are expensive, consuming traffic for which most customers have to pay, it is useful to minimize them during evolution. Each page needs to be tested only to the extent that it is possible to decide whether it is promising, i.e. whether it should serve as a parent in the next generation, or should be discarded. A process similar to age-layering (Hodjat and Shahrzad 2013; Shahrzad et al. 2016) is therefore used to allocate fitness evaluations. At each generation, each new candidate and each old candidate is evaluated with a small number (a maturity age) of user interactions, such as 2000. The top $n$ candidates are retained, and the bottom $n$ discarded. In this manner, bad candidates are eliminated quickly. Good candidates receive progressively more evaluations, and the confidence in their fitness estimate increases. In this process, Ascend learns which combinations of elements are effective, and gradually focuses the search around the most promising designs. It is thus sufficient to test only a tiny fraction of the search space to find the best ones, i.e. thousands of pages instead of millions or billions.

**Online Evolution**

While in simple cases (where the space of possible designs is small) such optimization can potentially be carried out by simpler mechanisms such as systematic search, hill-climbing, or reinforcement learning, the population-based approach is particularly effective because the evaluations can be done in parallel. The entire population can be tested at once, as different users interact with the site simultaneously. It is also unnecessary to test each design to statistical significance; only weak statistical evidence is sufficient to proceed in the search.
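A simplified version of this age-layered evaluation schedule is sketched below. The maturity age of 2000 interactions comes from the text; the true conversion rates and population are invented, and real traffic would of course replace the simulated visits.

```python
import random

rng = random.Random(3)
MATURITY_AGE = 2000  # user interactions allocated per candidate per generation

def simulate_visits(true_rate, n=MATURITY_AGE):
    """Simulate n user interactions; return the number of conversions."""
    return sum(rng.random() < true_rate for _ in range(n))

def generation_step(candidates, true_rates, n_keep):
    """Give every candidate one maturity age of traffic, then keep the
    top n_keep by estimated conversion rate. Conversions accumulate across
    generations, so surviving candidates' estimates grow more reliable."""
    for c in candidates:
        c["conversions"] += simulate_visits(true_rates[c["id"]])
        c["visits"] += MATURITY_AGE
    candidates.sort(key=lambda c: c["conversions"] / c["visits"], reverse=True)
    return candidates[:n_keep]

candidates = [{"id": i, "conversions": 0, "visits": 0} for i in range(6)]
true_rates = [0.04, 0.05, 0.05, 0.06, 0.07, 0.08]  # hypothetical
survivors = generation_step(candidates, true_rates, n_keep=3)
```

Candidates discarded after one maturity age consume only a small, bounded amount of traffic, which is the point of the age-layering scheme.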
In this process, thousands of page designs can be tested in a short time, which is impossible through A/B or multivariate testing. Figure 4 shows the overall architecture of the system: a population of alternative designs (center) is evaluated with many users in parallel (left), and the evolutionary process (right) adapts the population, generates new designs, and outputs the best design in the end. The system also keeps track of which design has been shown to which user, so that they get to see the same design if they return within a certain time limit (e.g. the same day).

**Case Study**

As an example of how Ascend works, let us consider a case study on optimizing the web interface for a media site that connects users to online education programs. This experiment was run in September through November 2016 on the desktop traffic of the site. For an animated demo of this experiment, see https://ai.cognizant.com/evoai/ascend-demo. The initial design for this page is shown in the left side of Figure 5. It had been hand designed using standard tools such as Optimizely. Its conversion rate during the time of the experiment was found to be 5.61%, which is typical of such web interfaces. Based on this page, the web designers came up with nine elements, with two to nine values each, resulting in 381,024 potential combinations (Figure 6). While much larger search spaces are possible, this example represents a mid-size space.

Figure 4: Overall Architecture of the Online Evolution System. The outcome of each interaction (i.e. whether the user converted or not) constitutes one evaluation of a design. Many such evaluations $ij$ are run in parallel with different users $j$ and averaged to estimate how good the design $i$ is. After all designs have been evaluated, the adaptation process discards bad designs and generates more variations of the best designs.
This process of generation, testing, and selection is repeated until a sufficiently good design has been found or the time allocated for the process has been spent. The best design found so far is output as the result of the learning process. The system thus discovers good designs for web interfaces through live online testing.

Figure 5: The control design and three best evolved designs.

After 60 days of evolution with 599,008 user interactions, a design for the search widget was found that converted 46.6% better than the control (5.61% vs. 8.22%), as well as other good designs. Much of the improvement was based on discovering a combination of colors that draws attention to the widget and makes the call to action clear. The initial population of 37 candidates was formed by systematically replacing each of the values in the control page with one of the alternative values, as described in the Initializing Evolution section. Evolution was then run for 60 days, or four generations, altogether testing 111 candidates with 599,008 user interactions total. The estimated conversion rates of the candidates over this time are shown in Figure 7. This figure demonstrates that evolution was successful in discovering significantly better candidates than control. As an independent verification, the three top candidates in Figure 5 were then subjected to an A/B test using Optimizely. In about 6500 user interactions, the best candidate was confirmed to increase the conversion rate by 43.5% with greater than 99% significance (and the other two by 37.1% and 28.2%), which is an excellent result given that the control was a candidate that was already hand-optimized using state-of-the-art tools. Unlike the Control, the top candidates utilize bright background colors to draw attention to the widget.
There is an important interaction between the background and the blue banner (whose color was fixed): in the best two designs (in the middle) the background is distinct from the banner but not competing with it. Moreover, given the colored background, a white button with black text provided the clearest call to action. It is difficult to recognize such interactions ahead of time, yet evolution discovered them early on, and many of the later candidates built on them. Other factors such as an active call to action (i.e. “Get Started” and “Find my Program” rather than “Request Info”) amplified it further. At the time evolution was turned off, better designs were still being discovered, suggesting that a more prolonged evolution and a larger search space (e.g. including banner color and other choices) could have improved the results further. It is also interesting to note that during the experiment, the human designers referred to Ascend as “the ugly widget generator,” suggesting that its designs were different from typical human designs. Remarkably, in doing so Ascend succeeded in creating a sense of urgency that is missing from the control design (Figure 8), suggesting that Ascend can discover effective design principles of its own.

**Comparison to Multivariate Testing**

The case study and numerous other examples reviewed in the Discussion section show that evolutionary optimization in Ascend discovers effective solutions. But does it offer improvement over other automated methods such as multivariate testing, and in particular the Taguchi method? Its ability to take advantage of interactions between design variables should allow it to find better designs than Taguchi. On the other hand, if the variables are indeed independent, Taguchi might be the better method. A simulation study in this section is presented to test this hypothesis; for more details, see (Jiang et al. 2018).
**Simulation setup**

In order to study this question systematically, a simulated environment was created where the degree of interactions could be controlled. In the simulation, an evaluator is first constructed to calculate a candidate's true conversion rate based on the values it specifies for each variable. Simulated traffic is distributed to candidates and conversions are assigned probabilistically based on the candidates' true conversion rates. The observed conversion rates are then used as the scores of the candidates in the Taguchi and evolution methods. By setting the parameters of the simulation differently, different kinds of evaluators, i.e. functions that determine the conversion rate $CR[c]$ of candidate $c$, can be defined. For instance, the simple linear evaluator is based on only the bias $W^0$ (i.e. the control conversion rate) and a weight $W^1_i(c)$ for each individual variable $i$:

$$CR[c] = W^0 + \sum_{i=1}^{n} W^1_i(c). \hspace{1cm} (1)$$

The bias represents the conversion rate of the control candidate; the different choices for each variable add to or subtract from it.

Figure 7: Estimated Conversion Rates through the 60-day Online Evolution Run. Days are on the x-axis and the conversion rate on the y-axis. The dark blue dots (on top) indicate the current best candidate, the light blue dots (in the middle) an average of all currently active candidates, and the orange dots (at the bottom) the estimated performance of the control design. The shaded areas display the 95% confidence intervals (from the binomial distribution with the observed mean). The dark blue peaks indicate the start of each new generation. Such peaks emerge because during the first few days, the new candidates have been evaluated only a small number of times, and some of them have very high estimated rates through random chance. Eventually they are evaluated to a maturity age of 2000 user interactions, and the estimates become lower and the confidence intervals narrower.
The elite candidates are tested across several generations (as described in the Evolutionary Process section), resulting in very narrow intervals towards the end. Estimated conversion rates of the best candidates in later generations are significantly higher than control, suggesting that evolution is effective in discovering better candidates. Interestingly, the active population average is also higher than control, indicating that the experiment did not incur any cost in performance.

A nonlinear evaluator adds pairwise interaction terms:

\[ CR[c] = W^0 + \sum_{i=1}^{n} W_i^1(c) + \sum_{j=1}^{n} \sum_{k=j+1}^{n} W_{j,k}^2(c). \hspace{1cm} (2) \]

That is, in addition to the bias and the individual variable contributions, it includes contributions \( W_{j,k}^2(c) \) for each pair of variables \( j, k \). Both the Taguchi candidates and the evolution candidates are represented in the same way, as concatenations of one-hot vectors representing the values for each variable in the Taguchi method, and for each gene in evolution. The total traffic for the Taguchi method and the evolution algorithm is set to be equal, distributed evenly to all Taguchi candidates, but differently for evolution candidates based on how many generations they survive. Eight generations of evolution were run with mutation rate 0.01 and elite percentage of 20%; the control conversion rate was $W^0 = 0.05$.

**The Taguchi method**

While full multivariate analysis would require testing all \( K^N \) combinations of \( N \) variables with \( K \) values each, the Taguchi method specifies a small subset of combinations to test using orthogonal arrays. A Taguchi orthogonal array is a matrix where each column corresponds to a variable and each row to a candidate to test. Each value represents the setting for a given variable in that candidate. An orthogonal array has two defining properties: the dot product between any two normalized column vectors is zero, and within every variable column, each value appears the same number of times. There are multiple ways of creating orthogonal arrays (Brouwer et al. 2006; Hedayat et al.
2018). Table 1 shows an example of an orthogonal array of nine combinations, resulting from testing four variables of three values each. To compute the effect of a specific variable value, the performance scores of the candidates corresponding to combinations with that value setting are averaged. Because all values of the other variables are tested an equal number of times in an orthogonal array, their effects cancel out, assuming each variable is independent (Hedayat et al. 2018). For example, to compute the effect of value 2 of variable 3 in Table 1, the scores of candidates 2, 4 and 9 are averaged. Similarly, for value 1, the scores of candidates 3, 5 and 7 are averaged. In a Taguchi experiment, all the candidates (rows) in the orthogonal table are tested, and the scores for candidates that share the same value for each variable are averaged in this manner. The prediction for the best-performing combination can then be constructed by selecting, for each variable, the value with the best such average score. The Taguchi method is a practical approximation of factorial testing. However, the averaging steps assume that the effects of each variable are independent, which may or may not hold in real-world experiments. In contrast, population-based search makes no such assumptions. The simulations are designed to evaluate how the two approaches compare with different amounts of traffic and degrees of interactions.
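The two simulated evaluators of Equations 1 and 2 can be sketched as follows. The weight ranges here are arbitrary stand-ins for the simulation's actual parameters; only the structure of the two equations is taken from the text.

```python
import random

rng = random.Random(0)
N_VARS, N_VALUES = 4, 3
W0 = 0.05  # control conversion rate (bias), as in the text
# Per-value main effects W^1 and pairwise interaction effects W^2
# (ranges are invented for illustration):
W1 = [[rng.uniform(-0.005, 0.005) for _ in range(N_VALUES)]
      for _ in range(N_VARS)]
W2 = {(j, k): [[rng.uniform(-0.002, 0.002) for _ in range(N_VALUES)]
               for _ in range(N_VALUES)]
      for j in range(N_VARS) for k in range(j + 1, N_VARS)}

def cr_linear(candidate):
    """Equation 1: bias plus independent per-variable contributions."""
    return W0 + sum(W1[i][v] for i, v in enumerate(candidate))

def cr_nonlinear(candidate):
    """Equation 2: Equation 1 plus pairwise interaction contributions."""
    inter = sum(W2[(j, k)][candidate[j]][candidate[k]]
                for j in range(N_VARS) for k in range(j + 1, N_VARS))
    return cr_linear(candidate) + inter
```

A candidate is a list of value indices, one per variable; observed conversions would then be drawn Bernoulli with the evaluator's rate as the success probability.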
<table>
<thead>
<tr>
<th></th>
<th>Var 1</th>
<th>Var 2</th>
<th>Var 3</th>
<th>Var 4</th>
<th>Performance</th>
</tr>
</thead>
<tbody>
<tr>
<td>Combination 1</td><td>0</td><td>0</td><td>0</td><td>0</td><td></td>
</tr>
<tr>
<td>Combination 2</td><td>0</td><td>1</td><td>2</td><td>1</td><td></td>
</tr>
<tr>
<td>Combination 3</td><td>0</td><td>2</td><td>1</td><td>2</td><td></td>
</tr>
<tr>
<td>Combination 4</td><td>1</td><td>0</td><td>2</td><td>2</td><td></td>
</tr>
<tr>
<td>Combination 5</td><td>1</td><td>1</td><td>1</td><td>0</td><td></td>
</tr>
<tr>
<td>Combination 6</td><td>1</td><td>2</td><td>0</td><td>1</td><td></td>
</tr>
<tr>
<td>Combination 7</td><td>2</td><td>0</td><td>1</td><td>1</td><td></td>
</tr>
<tr>
<td>Combination 8</td><td>2</td><td>1</td><td>0</td><td>2</td><td></td>
</tr>
<tr>
<td>Combination 9</td><td>2</td><td>2</td><td>2</td><td>0</td><td></td>
</tr>
</tbody>
</table>

Table 1: Example Taguchi array of four variables with three levels each. The Performance column holds the observed score of each tested combination.

**Experimental Results**

Three experiments were run comparing the Taguchi method with evolutionary optimization. In the first two, the goal was to find good candidates by the end of the experiment. In the first one, the variables had independent effects, and in the second, there were significant dependencies between pairs of variables. In the third experiment, the performance during the experiment was compared. The first experiment uses the linear evaluator of Equation 1, which assumes all changes are independent, and a simple genome that results in a small Taguchi array. These are the ideal conditions for the Taguchi method, and it is expected to perform well. The best settings for the Taguchi method are those with uniform numbers of values across all variables (Adobe 2018). In the experiment, four variables were used with three values each, i.e. $[3, 3, 3, 3]$, with $3^4 = 81$ combinations, resulting in nine rows in the orthogonal array (Kuhfeld 2018).
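The Taguchi averaging step described above can be illustrated directly on the L9 array of Table 1; the performance scores below are made-up numbers for illustration only.

```python
# Taguchi L9 orthogonal array from Table 1: rows are candidates,
# columns are the settings of variables 1-4.
L9 = [
    [0, 0, 0, 0],
    [0, 1, 2, 1],
    [0, 2, 1, 2],
    [1, 0, 2, 2],
    [1, 1, 1, 0],
    [1, 2, 0, 1],
    [2, 0, 1, 1],
    [2, 1, 0, 2],
    [2, 2, 2, 0],
]

def value_effect(array, scores, var, value):
    """Average score of all candidates that use `value` for variable `var`."""
    rows = [i for i, combo in enumerate(array) if combo[var] == value]
    return sum(scores[i] for i in rows) / len(rows)

def best_combination(array, scores, n_values=3):
    """Predict the best design: for each variable, pick the value with the
    highest average score (valid only if variable effects are independent)."""
    n_vars = len(array[0])
    return [max(range(n_values),
                key=lambda v: value_effect(array, scores, var, v))
            for var in range(n_vars)]

# Hypothetical observed conversion rates for the nine tested combinations:
scores = [0.050, 0.061, 0.055, 0.058, 0.052, 0.049, 0.057, 0.053, 0.060]
```

As in the text's worked example, the effect of value 2 of variable 3 averages the scores of candidates 2, 4 and 9 (rows with a 2 in the Var 3 column).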
In this experiment, the true conversion rate for the best evolution candidate is steady at 0.0565 at all levels of traffic from 50,000 to 10,000,000 samples. The best predicted Taguchi candidate's true conversion rate is significantly lower, 0.0548, with low traffic, but eventually catches up as traffic increases to about 1,000,000 samples. It is also better than the best candidate in the actual Taguchi array, whose true conversion rate was approximately 0.0548 at all levels of traffic. Thus, under ideal conditions for Taguchi, both methods find equally good solutions given enough traffic. With low traffic, however, the evolutionary approach performs significantly better. The likely reason is that while in the Taguchi method the set of candidates is fixed, in evolution it is not. Evolution discards bad candidates quickly and does not spend much traffic on them; instead it generates new candidates, and thus uses the traffic on evaluating increasingly better candidates. In the second experiment, the nonlinear evaluator of Equation 2 is used to simulate interactions that are likely to exist in the real world. Also, more variables with a varying number of possible values, i.e. [3, 6, 2, 3, 6, 2, 6], were used to make the problem more realistic. Figure 9 shows that in this case, the best predicted Taguchi candidate's true conversion rate is no longer comparable with evolution's. Furthermore, it does not even significantly outperform its best tested candidate. Interestingly, the performance of the evolutionary algorithm is not significantly worse with interacting vs. independent variables, demonstrating its ability to adapt to complicated real-world circumstances. While the main goal in conversion optimization is to find good candidates that can be deployed after the experiment, in many cases it is also important to not decrease the site's performance much during the experiment.
Evolution continuously creates improved candidates as it learns more about the system, whereas the Taguchi method generates a single set of candidates for the entire test. Evolution therefore provides continual improvement on the site even during the experiment. This principle is evident in the results of the third experiment, using the nonlinear evaluator of Equation 2 and the more complex genome of Figure 9. As can be seen in Figure 10, the Taguchi candidates’ average performance stays the same as traffic increases, whereas evolution’s candidates perform, on average, better as the experiment progresses. Evolution therefore forms a good approach in domains where performance matters during the experiment, in particular in campaigns that run only for a limited duration.

**Traffic Allocation in Noisy Domains**

When the Evolutionary CRO methods were taken out of the laboratory and into real-world applications, it became clear that there were new and interesting challenges that needed to be met. First, in the original Evolutionary CRO framework (Miikkulainen et al. 2017a, 2018), the evaluation of each candidate is performed in a static fashion: a fixed amount of traffic is allocated to each web design. This means that even if a candidate is clearly bad based on a few visits, the system gives it the same amount of traffic as the good ones. A large amount of real traffic may be wasted on bad candidates, leading to more expensive evaluations. Second, during the normal evolutionary process, only weak statistical evidence is obtained. Therefore, there is a multiple-hypotheses problem, i.e. the winning candidate is most likely not the one with the best true conversion rate, but one that got lucky in its evaluations. Third, the current evolutionary CRO technique is designed to identify a good candidate at the end of optimization.
However, in some scenarios, like the limited-duration campaigns of Figure 10, the goal for CRO is to make the overall conversion rate during optimization as high as possible. With uniform traffic allocation, bad candidates are tested as much as good ones, thereby reducing the overall conversion rate. These issues can be addressed with a more intelligent traffic allocation based on the Multi-Armed Bandit (MAB) approach. A general such approach, MAB-EA, will be developed in this section, as well as two specific methods, one for selecting the best candidate and another for maintaining high performance in campaign mode. The effectiveness of these methods will then be evaluated in simulation. For more details, see (Qiu and Miikkulainen 2019); for animated demos of this process, see https://ai.cognizant.com/evoai/ea-2.

**Multi-Armed Bandit Approach**

The first goal is to develop a framework that allocates traffic dynamically in a more efficient way. MAB algorithms (Audibert and Bubeck 2010; Auer et al. 2002; Bubeck et al. 2009; Robbins 1952; Weber 1992) are well suited for this role. In the MAB problem, a slot machine with multiple arms is given, and the gambler has to decide which arms to pull, how many times to pull each arm, and in which order to pull them, in order to maximize rewards. Each candidate web design can be regarded as an arm, and each visit to the website corresponds to a pull. The reward of each visit to a single web design is assumed to follow an unknown but fixed Bernoulli distribution. The probability of getting reward 1 (the visiting user is successfully converted) is $p$ and the probability of getting reward 0 (the visiting user is not converted) is $1 - p$, where $p$ is the true conversion rate of that web design. Given a fixed budget of traffic (number of visits) for each generation, a Bernoulli MAB algorithm is then invoked to allocate traffic to the current candidates.
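One standard Bernoulli MAB algorithm that fits this framing is Thompson Sampling. The sketch below (not the paper's implementation; the true conversion rates and budget are hypothetical) shows how it would allocate one generation's traffic budget: each arm keeps a Beta posterior over its conversion rate, and each visit goes to the arm with the highest sampled rate.

```python
import random

def thompson_allocate(true_rates, budget, rng):
    """Allocate a fixed traffic budget across arms with Thompson Sampling."""
    k = len(true_rates)
    wins = [0] * k    # conversions observed per arm
    losses = [0] * k  # non-conversions observed per arm
    for _ in range(budget):
        # Sample a plausible rate per arm from its Beta(1+w, 1+l) posterior.
        draws = [rng.betavariate(1 + wins[i], 1 + losses[i]) for i in range(k)]
        arm = max(range(k), key=draws.__getitem__)
        # Simulate the visit as a Bernoulli trial with the arm's true rate.
        if rng.random() < true_rates[arm]:
            wins[arm] += 1
        else:
            losses[arm] += 1
    return [wins[i] + losses[i] for i in range(k)]  # pulls per arm

rng = random.Random(0)
pulls = thompson_allocate([0.02, 0.05, 0.08], budget=10_000, rng=rng)
```

Most of the budget ends up on the best arm once the posteriors separate, which is exactly the behavior that keeps bad candidates from wasting traffic.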
The main effect of this method, MAB-EA, is that traffic is not wasted on bad candidates. A secondary effect is that the saved traffic can instead be used to evaluate the most promising candidates more accurately. The proposed framework is thus expected to both reduce the amount of traffic needed and improve overall optimization performance. Two specific instantiations of the framework will be described next, the first one for identifying a single best candidate at the end of evolution, and the second for maintaining high average performance during a campaign.

In the Best-Arm Identification (BAI) mode of MAB-EA, an additional BAI phase is applied after the evolution process has concluded. A MAB algorithm for pure exploration (Successive Rejects; Audibert and Bubeck 2010) is performed on an elite archive, i.e., the collection of top candidates over all generations. A single winner is returned after the BAI phase. Although additional traffic is needed for running the BAI phase, this cost can be offset by setting aside a small portion of traffic from each previous generation (e.g., 10%).

In the Campaign mode, MAB-EA is extended with asynchronous statistics. Whereas measurements such as the total reward, average reward, and number of pulls of each arm are usually initialized to 0, in Campaign mode all candidates that survive from the previous generation preserve these measurements and use them as the initial values in the current generation. The asynchronous MAB algorithm thus allocates more traffic to the existing elites without reevaluating them from scratch, focusing more on exploitation rather than exploration, and thus improving the overall conversion rate.

**Simulation Experiments**

The simulator introduced in the Taguchi comparison section was used to evaluate the effectiveness of MAB-EA. The simulated website consisted of eight elements, with varying design costs. The mean conversion rate for all possible designs is 0.04997, and the maximum is 0.08494.
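The Successive Rejects procedure used in the BAI phase splits the budget into K-1 phases and, after each phase, permanently rejects the arm with the worst empirical conversion rate. The sketch below follows the standard formulation (Audibert and Bubeck 2010) on simulated, well-separated conversion rates of my own choosing, not the paper's data.

```python
import math
import random

def successive_rejects(true_rates, budget, rng):
    """Return the index of the recommended best arm (Successive Rejects)."""
    k = len(true_rates)
    # Normalizer from the original algorithm: 1/2 + sum_{i=2..K} 1/i.
    log_bar = 0.5 + sum(1.0 / i for i in range(2, k + 1))
    active = list(range(k))
    pulls, wins = [0] * k, [0] * k
    n_prev = 0
    for phase in range(1, k):  # K - 1 phases, one rejection per phase
        # Cumulative per-arm pull target for this phase.
        n_phase = math.ceil((budget - k) / (log_bar * (k + 1 - phase)))
        for arm in active:
            for _ in range(n_phase - n_prev):
                pulls[arm] += 1
                wins[arm] += rng.random() < true_rates[arm]  # Bernoulli visit
        n_prev = n_phase
        # Reject the surviving arm with the worst empirical conversion rate.
        active.remove(min(active, key=lambda a: wins[a] / pulls[a]))
    return active[0]

winner = successive_rejects([0.01, 0.05, 0.20], budget=9_000,
                            rng=random.Random(0))
```

With a budget this large relative to the gaps between arms, the procedure reliably recommends the truly best arm; in the BAI mode the arms are the elite archive and the budget is the traffic reserved from earlier generations.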
Three MAB algorithms, Successive Rejects (SR), Thompson Sampling (TS), and Upper Confidence Bound 1 (UCB1), were evaluated and compared with the standard uniform traffic allocation. The traffic budget for each generation was fixed at 10,000, the population size $K$ at 20, the mutation probability $C_{m}$ at 0.01, and the elite and parent percentages varied between 10 and 30%.

First, the main observation on the basic MAB-EA runs is that TS and SR increase both the best and the overall conversion rate by 5-10% compared to the Standard Method (the differences are statistically significant with $p < 0.05$ based on a t-test on 500 independent runs). In contrast, since the average reward in the simulated CRO case is very low (e.g., 0.05), UCB1 favors more exploration, which encourages even allocation of the traffic, thereby leading to similar performance as the Standard Method.

When evaluating the extension of MAB-EA to best-arm identification, the basic MAB-EA methods have 11,000 visits per generation; the BAI extensions have 10,000 visits per generation and 10,000 additional visits in the BAI phase. The simulation is run for 15 generations, which is typical for Ascend experiments where a best design needs to be found. As can be seen in Figure 11, the BAI mode consistently improves over the Standard Method and the basic MAB-EA methods. It both converges faster early on, and explores more efficiently later. After Generation 10, BAI mode significantly outperforms MAB-EA even with less total traffic. BAI mode thus allows selecting a better winner, and estimates its future/true performance more accurately. It therefore provides an important improvement over the standard Ascend approach.

In the Campaign mode experiments, SR, TS and UCB1 are modified to run asynchronously and compared with their original versions, as well as with the Standard Method. Since Campaign mode usually runs for longer, the number of generations is set at 50.
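The UCB1 behavior noted above follows directly from its index, empirical mean plus the exploration bonus $\sqrt{2 \ln t / n_i}$ (Auer et al. 2002): when conversion rates are around 0.05, the bonus dwarfs the differences between arm means, so arms look nearly interchangeable and traffic stays spread out. A small illustration with hypothetical numbers:

```python
import math

def ucb1_index(mean, pulls, total_pulls):
    """UCB1 index: empirical mean plus an exploration bonus."""
    return mean + math.sqrt(2 * math.log(total_pulls) / pulls)

# Two arms whose empirical rates differ by 0.01, each pulled 5,000 times
# out of 10,000 total pulls. The exploration bonus (~0.061) is larger than
# either mean, so the two indices differ only by the tiny mean gap.
bonus = math.sqrt(2 * math.log(10_000) / 5_000)
lo = ucb1_index(0.04, 5_000, 10_000)
hi = ucb1_index(0.05, 5_000, 10_000)
```

Since both indices are dominated by the same bonus term, UCB1 keeps alternating between such arms far longer than TS or SR would, which matches its near-uniform allocation in the simulation.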
As can be seen in Figure 12, asynchronous SR and asynchronous TS perform significantly better than their original versions. For UCB1, the asynchronous version is better only in the early stages, where exploration is more important. These experiments therefore demonstrate how the MAB extension of Ascend can solve three general issues in evolutionary CRO: how to allocate the evaluation budget efficiently, how to select good final candidates reliably, and how to maintain a high overall conversion rate during evolution.

**Development, Deployment, and Maintenance**

Ascend by Evolv is a software-as-a-service (SaaS) application of evolutionary optimization. This section summarizes the Ascend team’s experience in developing, deploying, and maintaining the software for the growing customer base. The Ascend application is organized into three components: (1) Runtime: the code deployed on a customer’s website to manipulate the page content and gather analytics data. (2) Editor: the application that the customer uses to configure the Ascend experiment, specifying the pages to be tested and the types of changes to be made on them. (3) Evolution: the primary optimization module that decides what content to serve on the website.

Ascend was built and is maintained by a group of web developers, systems engineers, and data scientists. The team practices agile development methodologies as well as continuous deployment and integration. The team currently operates on a two-week sprint cycle, and splits the backlog between the three primary components discussed above. The minimum viable product took six months to develop for a team of eight engineers (two front-end, three full-stack, two data scientists, and one devops/pipeline engineer) and a project manager. The cost was roughly mid-level SWE cost for the region (San Francisco Bay Area).
The main challenges in developing Ascend were to render the changes on the webpages sufficiently fast, and to minimize the CPU, bandwidth, and latency impact that this process causes on our customers’ websites. These difficulties were overcome with benchmarking tools, investments in latency-based routing systems, and through partnering with multiple high-performance content-delivery networks. In addition, implementation of evolutionary algorithms requires specialized knowledge in AI, and such talent is difficult to recruit and retain.

In terms of lessons learned, it turned out that every website and its rendering logic presents a new potential problem (and edge case) to solve. The team needed to develop a number of diagnostic tools to be able to respond to issues quickly, as opposed to a plan for mitigating all potential issues through defensive engineering. With web applications, issues will always arise, and the best plan is to prepare for them and have a team on call to resolve them. In terms of methods, frequentist statistical requirements such as significance with \( p < 0.05 \) are not tenable in the highly variable environment of website traffic. Alternative methods of measuring statistical validity and selecting candidates are needed, such as the multi-armed bandit methods described above, and a method based on averaging in candidate neighborhoods (Miikkulainen et al. 2017b).

Ascend is maintained by a developer-operations engineering team as well as software engineers responsible for each of the three components of the application. Updates are released roughly once every two weeks. The domain knowledge changes moderately over time: the data science needs to be updated to keep up with the growing customer base, and web analytics and browser support require continual updates to keep up with developments in these industries.
The application is modularized so that releases can be pushed to components of the application without touching the critical path where not needed. For example, evolution is built as a service and can therefore be updated without impacting the rest of the application. Changes to evolution methods can be tested in simulation based on historical data before deploying them in the application itself.

**Discussion and Future Work**

During its first year, Ascend was applied to numerous web interfaces across industries and search-space sizes. The results were remarkably reliable: in all cases the conversion rates were improved significantly over control, in some cases over four-fold (Table 2). Although Ascend was expected to excel in search spaces with millions of combinations, somewhat surprisingly it also found improvements even in spaces with a few dozen combinations, suggesting that human intuition in this domain is limited, and automated methods can help.

The main challenge is indeed the human element, in two ways. First, web designers, who are used to A/B and multivariate testing, often try to minimize the search space as much as possible, i.e. limit the number of elements and values, thereby not giving evolution much space to explore and discover the most powerful solutions. Second, because it often takes only a couple of generations for evolution to discover a significant improvement, designers are likely to terminate evolution early, instead of letting it optimize the designs fully. Utilizing evolutionary search as a tool requires a different kind of thinking; as designers become more familiar with it, we believe they will be able to take advantage of the full power of evolutionary search, reaching more refined results. Currently Ascend delivers one best design, or a small number of good ones, in the end as the result, again in keeping with the A/B testing tradition.
In many cases there are seasonal variations and other long-term changing trends, making the performance of good designs gradually decay. It is possible to counter this problem by running the optimization again every few months. However, a new “always-on” paradigm would be more appropriate: evolutionary optimization can be run continuously at a low volume, keeping up with changing trends (i.e. through dynamic evolutionary optimization; Branke 2002). New designs can then be adopted periodically when their performance exceeds that of the old designs significantly. Also, in some cases the customer wants to run a limited campaign, driving traffic to the site e.g. for a few weeks, after which time the web interface will no longer be needed. Instead of optimizing the final web interface design, conversions need to be optimized over all designs tested during evolution. As seen in Figure 7, the average performance of all candidates tested usually rises above the control very quickly, and Ascend can therefore already be used for campaigns as is. However, knowing that every candidate counts toward performance, traffic can be allocated more efficiently, in order to optimize campaign performance instead of future performance.
<table>
<thead> <tr> <th>Industry</th> <th># of values</th> <th># of elements</th> <th># of combinations</th> <th>Length of test</th> <th>CR increase %</th> </tr> </thead>
<tbody>
<tr> <td>Annuities</td> <td>11</td> <td>3</td> <td>48</td> <td>12 weeks</td> <td>24</td> </tr>
<tr> <td>Intimacy Apparel Retailer</td> <td>15</td> <td>4</td> <td>160</td> <td>8 weeks</td> <td>38</td> </tr>
<tr> <td>Flower retailer</td> <td>16</td> <td>8</td> <td>256</td> <td>8 weeks</td> <td>35</td> </tr>
<tr> <td>Digital Commerce Payments</td> <td>20</td> <td>9</td> <td>1,152</td> <td>3 weeks</td> <td>9</td> </tr>
<tr> <td>Web search results</td> <td>26</td> <td>10</td> <td>10,368</td> <td>6 weeks</td> <td>22</td> </tr>
<tr> <td>Japanese Clothing Retailer</td> <td>30</td> <td>8</td> <td>12,800</td> <td>8 weeks</td> <td>40</td> </tr>
<tr> <td>Classic Car Reseller</td> <td>30</td> <td>8</td> <td>28,800</td> <td>3 weeks</td> <td>434</td> </tr>
<tr> <td>Entertainment Ecommerce</td> <td>32</td> <td>8</td> <td>77,760</td> <td>5 weeks</td> <td>50</td> </tr>
<tr> <td>Comparison Shopping</td> <td>30</td> <td>8</td> <td>241,920</td> <td>9 weeks</td> <td>31</td> </tr>
<tr> <td>Leading Mobile Network</td> <td>42</td> <td>9</td> <td>1,296,600</td> <td>6 weeks</td> <td>75</td> </tr>
<tr> <td>Australian Beauty Retailer</td> <td>48</td> <td>13</td> <td>1,382,400</td> <td>8 weeks</td> <td>45</td> </tr>
</tbody>
</table>

Table 2: Examples of Ascend applications across industries and search space sizes. During its first year as a commercial product, Ascend has been used to optimize a diverse set of web interfaces consistently and significantly, with typical CR gains of 20-50%, and sometimes over 400%.

The multi-armed bandit methods described above are a promising approach to such campaign optimization. Furthermore, currently Ascend optimizes a single design to be used with all future users of a mobile or desktop site.
An interesting extension would be to take user segmentation (Yankelovich and Meer 2006) into account, and evolve different pages for different kinds of users. Moreover, such a mapping from user characterizations to page designs can be automatic: a mapping system such as a neural network can take user variables such as location, time, device, and any past history with the site as inputs, and generate the vector of elements and their values as outputs. Neuroevolution (Floreano et al. 2008; Lehman and Miikkulainen 2013b) can discover such optimal mappings, in effect evolving a dynamic, continuous segmentation of the user space. Users will be shown designs that are likely to convert well based on experience with other users with similar characteristics, continuously and automatically. It will be possible to analyze such evolved neural networks and discover which variables are most predictive, characterize the main user segments, and thereby develop an in-depth understanding of the opportunity.

Finally, the Ascend approach is not limited to optimizing conversions. Any outcome that can be measured, such as revenue or user retention, can be optimized. The approach can also be used in a different role, such as optimizing the resources spent on attracting users, e.g. ad placement and selection, adword bidding, and email marketing. The approach can be seen as a fundamental step in bringing machine optimization into e-commerce, and demonstrating the value of evolutionary computation in real-world problems.

**Conclusion**

Ascend by Evolv is the first automated system for massively multivariate conversion optimization, replacing A/B testing with AI. Ascend scales up interactive evolution by testing a large number of candidates in parallel on real users.
Human designers specify the search space, and evolutionary optimization finds effective designs in that space, including design principles that humans tend to overlook, and interactions that current multivariate methods miss. Ascend has been applied to numerous web interfaces across industries and search space sizes and has been able to improve them consistently and significantly. In the future, it should be possible to extend it to continuous optimization, limited-time campaigns, and user segmentation as well.

Risto Miikkulainen is a Professor of Computer Science at the University of Texas at Austin and Associate VP of Evolutionary AI at Cognizant. His current research focuses on methods and applications of neuroevolution, as well as neural network models of natural language processing and vision. At Cognizant, and previously as CTO of Sentient Technologies, he is scaling up these approaches to real-world problems. He received an M.S. in Engineering from Helsinki University of Technology (now Aalto University) in 1986, and a Ph.D. in Computer Science from UCLA in 1990.

Jonathan Epstein is Chief Strategy Officer for Evolv Technologies. Previously Chief Marketing Officer and SVP (international) at Sentient, he was intimately involved with the development and launch of Ascend. Prior to working at Evolv and Sentient, Epstein held key executive positions at the intersection of technology and media, including president of Omek Interactive, CEO of GameSpot, CEO of Double Fusion, and SVP of IGN Entertainment. He has authored multiple patents in fields ranging from gesture control to in-game advertising to remotely operated underwater vehicles. Epstein graduated from Harvard with an AB in physical sciences.

Babak Hodjat is VP of Evolutionary AI at Cognizant, and former co-founder and CEO of Sentient and a co-founder of Sentient Investment Management.
He is a serial entrepreneur, having started a number of Silicon Valley companies as main inventor and technologist. Prior to co-founding Sentient, Hodjat was senior director of engineering at Sybase iAnywhere, where he led mobile solutions engineering, and a co-founder, CTO and board member of Dejima Inc. Hodjat is the primary inventor of Dejima’s agent-oriented technology applied to intelligent interfaces for mobile and enterprise computing – the technology behind Apple’s Siri. He has publications and patents in numerous fields of AI, including natural language processing, machine learning, genetic algorithms, and distributed AI. He holds a PhD in Machine Intelligence from Kyushu University in Fukuoka, Japan.

Neil Iscoe was the CEO and co-founder of Digital Certainty, the company that created the original version of Ascend. After the product was sold to Sentient Technologies, he became the product’s general manager. Previously, he was the Associate VP of Research & Director of Technology Commercialization at the University of Texas, where he was responsible for creating commercialization entities and marketable products from university research. In 2011, he left the university to build Ascend. He has an MS and PhD in computer sciences, with a specialization in systems and AI, from the University of Texas.

Jingbo Jiang was an intern at Sentient Technologies in Summer 2018, working on evolutionary algorithms and their application to web page design, and in particular the comparisons with the Taguchi method. Her background is in machine learning, computer vision and language processing. She earned her MS in data science from the University of Pennsylvania in 2019 and her BE in electrical engineering from Beihang University, China, in 2017.

Sam Nazari is the VP of Customer Success at Evolv.AI.
He leads the global team that works closely with clients to help them understand and integrate AI across their enterprises, from large Fortune 500 companies to medium-sized businesses spanning multiple verticals: from explaining the best use cases for AI in marketing, through preparing their data, designing and implementing the technology, and securing compliance with their IT teams, to the successful rollout of AI to help drive revenue. Nazari has a BS in Computer & Electrical Engineering from the University of Utah.

Xin Qiu is a Senior Research Scientist at Cognizant and previously at Sentient. His research interests include evolutionary computation, probabilistic modeling, bandit algorithms and reinforcement learning. He earned his PhD from the National University of Singapore in 2016 and his BE from Nanjing University in 2012.

Michael Scharff is the Chief Executive Officer of Evolv Technologies, the AI firm behind the Ascend autonomous website optimization platform. Scharff brings over two decades of digital commerce and retail experience, with leadership roles at some of the best-known retailers in the US, including Sears Canada, Toys R Us, Staples and Best Buy. He has a wealth of experience in all aspects of retailing and across numerous industry verticals and channels. Scharff has built and managed highly successful omni-channel and global eCommerce businesses, and has led teams in merchandising, digital marketing, innovation and other functional areas.

Cory Schoolland has over a decade of experience heading marketing design efforts for San Francisco-based SaaS companies including RichRelevance, Sentient Technologies, and Evolv Technologies. He believes in the power of good design to improve our lives, and enjoys combining words, shapes, images, and colors to tell a story, or taking existing content and making it beautiful.
When not pushing colorful pixels around a computer screen, Schoolland can often be found re-creating classic cocktails from the 30s and 40s, reading up on cocktail history, or slowly savoring an artisanal rum.

Rob Severn is the Director of Product at Evolv. He is responsible for understanding the optimization market’s problems and needs in order to better guide Evolv’s optimization product. Severn received a BA in Mathematics from Cambridge University in 2006.

Aaron Shagrin has been working with technology companies, large and small, for over 20 years. He has been a part of multiple startups, Fortune 500 companies, and private equity firms. He has a deep background in product management and strategy, acquisitions, alliances, and business development & sales. He has a Bachelor of Business Administration from The University of Texas.